| id | lastModified | tags | author | description | citation | likes | downloads | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|
BigSuperbPrivate/SpeakerCounting_LibrittsTrainClean100 | 2023-07-31T08:01:44.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-07-13T18:23:04 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
- name: utterance 1
dtype: string
- name: utterance 2
dtype: string
- name: utterance 3
dtype: string
- name: utterance 4
dtype: string
- name: utterance 5
dtype: string
splits:
- name: train
num_bytes: 1438538131.0
num_examples: 10000
- name: validation
num_bytes: 199304545.0
num_examples: 1000
download_size: 2240435961
dataset_size: 1637842676.0
---
# Dataset Card for "SpeakerCounting_LibriTTSTrainClean100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 771 | [
[
-0.05584716796875,
-0.012298583984375,
0.0177459716796875,
0.0084381103515625,
-0.003513336181640625,
-0.00284576416015625,
-0.0038356781005859375,
0.0019054412841796875,
0.06463623046875,
0.04150390625,
-0.053985595703125,
-0.048187255859375,
-0.030868530273437... |
DynamicSuperb/NoiseDetection_LJSpeech_MUSAN-Speech | 2023-07-18T09:10:45.000Z | [
"region:us"
] | DynamicSuperb | null | null | 0 | 8 | 2023-07-14T03:16:21 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 3371932555.0
num_examples: 26200
download_size: 3362676277
dataset_size: 3371932555.0
---
# Dataset Card for "NoiseDetectionspeech_LJSpeechMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 496 | [
[
-0.036529541015625,
-0.01282501220703125,
0.0181427001953125,
0.0159149169921875,
-0.0117645263671875,
0.005901336669921875,
0.00775909423828125,
-0.02252197265625,
0.061737060546875,
0.0244293212890625,
-0.061859130859375,
-0.0418701171875,
-0.03753662109375,
... |
recastai/coyo-10m-aesthetic | 2023-07-15T05:46:54.000Z | [
"region:us"
] | recastai | null | null | 0 | 8 | 2023-07-15T05:23:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
MichaelR207/MultiSim | 2023-07-18T23:19:38.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"language:fr",
"language:ru",
"language:ja",
"language:it",
"language:da",
"language:es",
"language:de",
"language:pt",
"language:sl",
"l... | MichaelR207 | null | null | 0 | 8 | 2023-07-18T21:55:31 | ---
license: mit
language:
- en
- fr
- ru
- ja
- it
- da
- es
- de
- pt
- sl
- ur
- eu
task_categories:
- summarization
- text2text-generation
- text-generation
pretty_name: MultiSim
tags:
- medical
- legal
- wikipedia
- encyclopedia
- science
- literature
- news
- websites
size_categories:
- 1M<n<10M
---
# Dataset Card for MultiSim Benchmark
## Dataset Description
- **Repository:** https://github.com/XenonMolecule/MultiSim/tree/main
- **Paper:** https://aclanthology.org/2023.acl-long.269/ | https://arxiv.org/pdf/2305.15678.pdf
- **Point of Contact:** michaeljryan@stanford.edu
### Dataset Summary
The MultiSim benchmark is a growing collection of text simplification datasets targeted at sentence simplification in several languages. Currently, the benchmark spans 12 languages.

### Supported Tasks
- Sentence Simplification
### Usage
```python
from datasets import load_dataset
dataset = load_dataset("MichaelR207/MultiSim")
```
### Citation
If you use this benchmark, please cite our [paper](https://aclanthology.org/2023.acl-long.269/):
```
@inproceedings{ryan-etal-2023-revisiting,
title = "Revisiting non-{E}nglish Text Simplification: A Unified Multilingual Benchmark",
author = "Ryan, Michael and
Naous, Tarek and
Xu, Wei",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.269",
pages = "4898--4927",
abstract = "Recent advancements in high-quality, large-scale English resources have pushed the frontier of English Automatic Text Simplification (ATS) research. However, less work has been done on multilingual text simplification due to the lack of a diverse evaluation benchmark that covers complex-simple sentence pairs in many languages. This paper introduces the MultiSim benchmark, a collection of 27 resources in 12 distinct languages containing over 1.7 million complex-simple sentence pairs. This benchmark will encourage research in developing more effective multilingual text simplification models and evaluation metrics. Our experiments using MultiSim with pre-trained multilingual language models reveal exciting performance improvements from multilingual training in non-English settings. We observe strong performance from Russian in zero-shot cross-lingual transfer to low-resource languages. We further show that few-shot prompting with BLOOM-176b achieves comparable quality to reference simplifications outperforming fine-tuned models in most languages. We validate these findings through human evaluation.",
}
```
### Contact
**Michael Ryan**: [Scholar](https://scholar.google.com/citations?user=8APGEEkAAAAJ&hl=en) | [Twitter](http://twitter.com/michaelryan207) | [Github](https://github.com/XenonMolecule) | [LinkedIn](https://www.linkedin.com/in/michael-ryan-207/) | [Research Gate](https://www.researchgate.net/profile/Michael-Ryan-86) | [Personal Website](http://michaelryan.tech/) | [michaeljryan@stanford.edu](mailto://michaeljryan@stanford.edu)
### Languages
- English
- French
- Russian
- Japanese
- Italian
- Danish (on request)
- Spanish (on request)
- German
- Brazilian Portuguese
- Slovene
- Urdu (on request)
- Basque (on request)
## Dataset Structure
### Data Instances
MultiSim is a collection of 27 existing datasets:
- AdminIT
- ASSET
- CBST
- CLEAR
- DSim
- Easy Japanese
- Easy Japanese Extended
- GEOLino
- German News
- Newsela EN/ES
- PaCCSS-IT
- PorSimples
- RSSE
- RuAdapt Encyclopedia
- RuAdapt Fairytales
- RuAdapt Literature
- RuWikiLarge
- SIMPITIKI
- Simple German
- Simplext
- SimplifyUR
- SloTS
- Teacher
- Terence
- TextComplexityDE
- WikiAuto
- WikiLargeFR

### Data Fields
In the train set, you will find only `original` and `simple` sentences. In the validation and test sets, you may also find `simple1`, `simple2`, ..., `simpleN`, because a given sentence can have multiple reference simplifications (useful for SARI and BLEU calculations).
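As a minimal sketch of working with this layout (the column names `simple` and `simple1` ... `simpleN` are assumptions based on the field description above), the reference simplifications for a row can be gathered like this:

```python
def collect_references(row):
    """Gather all reference simplifications present in a dataset row.

    Column names ('simple', 'simple1', 'simple2', ...) are assumed
    from the field description in this card.
    """
    refs = []
    i = 1
    # Validation/test-style rows: numbered reference columns.
    while row.get(f"simple{i}"):
        refs.append(row[f"simple{i}"])
        i += 1
    # Train-style rows carry a single 'simple' reference instead.
    if not refs and row.get("simple"):
        refs.append(row["simple"])
    return refs


row = {
    "original": "The vessel foundered in heavy seas.",
    "simple1": "The ship sank in a storm.",
    "simple2": "The boat went down in bad weather.",
}
print(collect_references(row))
# → ['The ship sank in a storm.', 'The boat went down in bad weather.']
```

Lists built this way can then be passed as the multiple references expected by SARI or BLEU implementations.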
### Data Splits
The dataset is split into a train, validation, and test set.

## Dataset Creation
### Curation Rationale
I hope that collecting all of these independently useful text simplification resources into one benchmark will encourage multilingual work on text simplification!
### Source Data
#### Initial Data Collection and Normalization
Data is compiled from the 27 existing datasets that comprise the MultiSim Benchmark. For details on each of the resources please see Appendix A in the [paper](https://aclanthology.org/2023.acl-long.269.pdf).
#### Who are the source language producers?
Each dataset has different sources. At a high level, the sources are: automatically collected (e.g., Wikipedia, web data), manually collected (e.g., annotators asked to simplify sentences), target-audience resources (e.g., Newsela news articles), or translated (e.g., machine translations of existing datasets).
These sources can be seen in Table 1 pictured above (Section: `Dataset Structure/Data Instances`) and further discussed in section 3 of the [paper](https://aclanthology.org/2023.acl-long.269.pdf). Appendix A of the paper has details on specific resources.
### Annotations
#### Annotation process
Annotators writing simplifications (only for some datasets) typically follow an annotation guideline. Some example guidelines come from [here](https://dl.acm.org/doi/10.1145/1410140.1410191), [here](https://link.springer.com/article/10.1007/s11168-006-9011-1), and [here](https://link.springer.com/article/10.1007/s10579-017-9407-6).
#### Who are the annotators?
See Table 1 (Section: `Dataset Structure/Data Instances`) for the specific annotators per dataset. At a high level, the annotators are: writers, translators, teachers, linguists, journalists, crowdworkers, experts, news agencies, medical students, students, and researchers.
### Personal and Sensitive Information
None of the datasets should contain personal or sensitive information. These are previously collected resources drawn primarily from news sources, Wikipedia, science communications, etc., and were not identified to contain personally identifiable information.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this dataset will have a strongly positive social impact, as text simplification serves children, second-language learners, and people with reading or cognitive disabilities. By publicly releasing a dataset in 12 languages, we hope to serve these global communities.
One negative and unintended use case for this data would be reversing the labels to train a "text complification" model. We believe the benefits of releasing this data outweigh the harms and hope that people use the dataset as intended.
### Discussion of Biases
There may be biases from the annotators who wrote the simplifications regarding how they believe a simpler sentence should be written. Additionally, annotators and editors choose which information does not make the cut in the simpler sentence, introducing information-importance bias.
### Other Known Limitations
Some of the included resources were automatically collected or machine translated. As such, not every sentence is perfectly aligned. Users are advised to use such individual resources with caution.
## Additional Information
### Dataset Curators
**Michael Ryan**: [Scholar](https://scholar.google.com/citations?user=8APGEEkAAAAJ&hl=en) | [Twitter](http://twitter.com/michaelryan207) | [Github](https://github.com/XenonMolecule) | [LinkedIn](https://www.linkedin.com/in/michael-ryan-207/) | [Research Gate](https://www.researchgate.net/profile/Michael-Ryan-86) | [Personal Website](http://michaelryan.tech/) | [michaeljryan@stanford.edu](mailto://michaeljryan@stanford.edu)
### Licensing Information
MIT License
### Citation Information
Please cite the individual datasets that you use within the MultiSim benchmark as appropriate. Proper BibTeX attributions for each of the datasets are included below.
#### AdminIT
```
@inproceedings{miliani-etal-2022-neural,
title = "Neural Readability Pairwise Ranking for Sentences in {I}talian Administrative Language",
author = "Miliani, Martina and
Auriemma, Serena and
Alva-Manchego, Fernando and
Lenci, Alessandro",
booktitle = "Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online only",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.aacl-main.63",
pages = "849--866",
abstract = "Automatic Readability Assessment aims at assigning a complexity level to a given text, which could help improve the accessibility to information in specific domains, such as the administrative one. In this paper, we investigate the behavior of a Neural Pairwise Ranking Model (NPRM) for sentence-level readability assessment of Italian administrative texts. To deal with data scarcity, we experiment with cross-lingual, cross- and in-domain approaches, and test our models on Admin-It, a new parallel corpus in the Italian administrative language, containing sentences simplified using three different rewriting strategies. We show that NPRMs are effective in zero-shot scenarios ({\textasciitilde}0.78 ranking accuracy), especially with ranking pairs containing simplifications produced by overall rewriting at the sentence-level, and that the best results are obtained by adding in-domain data (achieving perfect performance for such sentence pairs). Finally, we investigate where NPRMs failed, showing that the characteristics of the training data, rather than its size, have a bigger effect on a model{'}s performance.",
}
```
#### ASSET
```
@inproceedings{alva-manchego-etal-2020-asset,
title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
author = "Alva-Manchego, Fernando and
Martin, Louis and
Bordes, Antoine and
Scarton, Carolina and
Sagot, Beno{\^\i}t and
Specia, Lucia",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.424",
pages = "4668--4679",
}
```
#### CBST
```
@article{10.1007/s10579-017-9407-6,
title={{The corpus of Basque simplified texts (CBST)}},
author={Gonzalez-Dios, Itziar and Aranzabe, Mar{\'\i}a Jes{\'u}s and D{\'\i}az de Ilarraza, Arantza},
journal={Language Resources and Evaluation},
volume={52},
number={1},
pages={217--247},
year={2018},
publisher={Springer}
}
```
#### CLEAR
```
@inproceedings{grabar-cardon-2018-clear,
title = "{CLEAR} {--} Simple Corpus for Medical {F}rench",
author = "Grabar, Natalia and
Cardon, R{\'e}mi",
booktitle = "Proceedings of the 1st Workshop on Automatic Text Adaptation ({ATA})",
month = nov,
year = "2018",
address = "Tilburg, the Netherlands",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W18-7002",
doi = "10.18653/v1/W18-7002",
pages = "3--9",
}
```
#### DSim
```
@inproceedings{klerke-sogaard-2012-dsim,
title = "{DS}im, a {D}anish Parallel Corpus for Text Simplification",
author = "Klerke, Sigrid and
S{\o}gaard, Anders",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/270_Paper.pdf",
pages = "4015--4018",
    abstract = "We present DSim, a new sentence aligned Danish monolingual parallel corpus extracted from 3701 pairs of news telegrams and corresponding professionally simplified short news articles. The corpus is intended for building automatic text simplification for adult readers. We compare DSim to different examples of monolingual parallel corpora, and we argue that this corpus is a promising basis for future development of automatic data-driven text simplification systems in Danish. The corpus contains both the collection of paired articles and a sentence aligned bitext, and we show that sentence alignment using simple tf*idf weighted cosine similarity scoring is on line with state-of-the-art when evaluated against a hand-aligned sample. The alignment results are compared to state of the art for English sentence alignment. We finally compare the source and simplified sides of the corpus in terms of lexical and syntactic characteristics and readability, and find that the one-to-many sentence aligned corpus is representative of the sentence simplifications observed in the unaligned collection of article pairs.",
}
```
#### Easy Japanese
```
@inproceedings{maruyama-yamamoto-2018-simplified,
title = "Simplified Corpus with Core Vocabulary",
author = "Maruyama, Takumi and
Yamamoto, Kazuhide",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1185",
}
```
#### Easy Japanese Extended
```
@inproceedings{katsuta-yamamoto-2018-crowdsourced,
title = "Crowdsourced Corpus of Sentence Simplification with Core Vocabulary",
author = "Katsuta, Akihiro and
Yamamoto, Kazuhide",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1072",
}
```
#### GEOLino
```
@inproceedings{mallinson2020,
title={Zero-Shot Crosslingual Sentence Simplification},
author={Mallinson, Jonathan and Sennrich, Rico and Lapata, Mirella},
year={2020},
booktitle={2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)}
}
```
#### German News
```
@inproceedings{sauberli-etal-2020-benchmarking,
title = "Benchmarking Data-driven Automatic Text Simplification for {G}erman",
author = {S{\"a}uberli, Andreas and
Ebling, Sarah and
Volk, Martin},
booktitle = "Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.readi-1.7",
pages = "41--48",
abstract = "Automatic text simplification is an active research area, and there are first systems for English, Spanish, Portuguese, and Italian. For German, no data-driven approach exists to this date, due to a lack of training data. In this paper, we present a parallel corpus of news items in German with corresponding simplifications on two complexity levels. The simplifications have been produced according to a well-documented set of guidelines. We then report on experiments in automatically simplifying the German news items using state-of-the-art neural machine translation techniques. We demonstrate that despite our small parallel corpus, our neural models were able to learn essential features of simplified language, such as lexical substitutions, deletion of less relevant words and phrases, and sentence shortening.",
language = "English",
ISBN = "979-10-95546-45-0",
}
```
#### Newsela EN/ES
```
@article{xu-etal-2015-problems,
title = "Problems in Current Text Simplification Research: New Data Can Help",
author = "Xu, Wei and
Callison-Burch, Chris and
Napoles, Courtney",
journal = "Transactions of the Association for Computational Linguistics",
volume = "3",
year = "2015",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/Q15-1021",
doi = "10.1162/tacl_a_00139",
pages = "283--297",
abstract = "Simple Wikipedia has dominated simplification research in the past 5 years. In this opinion paper, we argue that focusing on Wikipedia limits simplification research. We back up our arguments with corpus analysis and by highlighting statements that other researchers have made in the simplification literature. We introduce a new simplification dataset that is a significant improvement over Simple Wikipedia, and present a novel quantitative-comparative approach to study the quality of simplification data resources.",
}
```
#### PaCCSS-IT
```
@inproceedings{brunato-etal-2016-paccss,
title = "{P}a{CCSS}-{IT}: A Parallel Corpus of Complex-Simple Sentences for Automatic Text Simplification",
author = "Brunato, Dominique and
Cimino, Andrea and
Dell{'}Orletta, Felice and
Venturi, Giulia",
booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2016",
address = "Austin, Texas",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D16-1034",
doi = "10.18653/v1/D16-1034",
pages = "351--361",
}
```
#### PorSimples
```
@inproceedings{aluisio-gasperin-2010-fostering,
title = "Fostering Digital Inclusion and Accessibility: The {P}or{S}imples project for Simplification of {P}ortuguese Texts",
author = "Alu{\'\i}sio, Sandra and
Gasperin, Caroline",
booktitle = "Proceedings of the {NAACL} {HLT} 2010 Young Investigators Workshop on Computational Approaches to Languages of the {A}mericas",
month = jun,
year = "2010",
address = "Los Angeles, California",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1607",
pages = "46--53",
}
```
```
@inproceedings{10.1007/978-3-642-16952-6_31,
author="Scarton, Carolina and Gasperin, Caroline and Aluisio, Sandra",
editor="Kuri-Morales, Angel and Simari, Guillermo R.",
title="Revisiting the Readability Assessment of Texts in Portuguese",
booktitle="Advances in Artificial Intelligence -- IBERAMIA 2010",
year="2010",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="306--315",
isbn="978-3-642-16952-6"
}
```
#### RSSE
```
@inproceedings{sakhovskiy2021rusimplesenteval,
title={{RuSimpleSentEval-2021 shared task:} evaluating sentence simplification for Russian},
author={Sakhovskiy, Andrey and Izhevskaya, Alexandra and Pestova, Alena and Tutubalina, Elena and Malykh, Valentin and Smurov, Ivana and Artemova, Ekaterina},
  booktitle={Proceedings of the International Conference “Dialogue”},
pages={607--617},
year={2021}
}
```
#### RuAdapt
```
@inproceedings{Dmitrieva2021Quantitative,
title={A quantitative study of simplification strategies in adapted texts for L2 learners of Russian},
author={Dmitrieva, Anna and Laposhina, Antonina and Lebedeva, Maria},
  booktitle={Proceedings of the International Conference “Dialogue”},
pages={191--203},
year={2021}
}
```
```
@inproceedings{dmitrieva-tiedemann-2021-creating,
title = "Creating an Aligned {R}ussian Text Simplification Dataset from Language Learner Data",
author = {Dmitrieva, Anna and
Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.bsnlp-1.8",
pages = "73--79",
abstract = "Parallel language corpora where regular texts are aligned with their simplified versions can be used in both natural language processing and theoretical linguistic studies. They are essential for the task of automatic text simplification, but can also provide valuable insights into the characteristics that make texts more accessible and reveal strategies that human experts use to simplify texts. Today, there exist a few parallel datasets for English and Simple English, but many other languages lack such data. In this paper we describe our work on creating an aligned Russian-Simple Russian dataset composed of Russian literature texts adapted for learners of Russian as a foreign language. This will be the first parallel dataset in this domain, and one of the first Simple Russian datasets in general.",
}
```
#### RuWikiLarge
```
@inproceedings{sakhovskiy2021rusimplesenteval,
title={{RuSimpleSentEval-2021 shared task:} evaluating sentence simplification for Russian},
author={Sakhovskiy, Andrey and Izhevskaya, Alexandra and Pestova, Alena and Tutubalina, Elena and Malykh, Valentin and Smurov, Ivana and Artemova, Ekaterina},
  booktitle={Proceedings of the International Conference “Dialogue”},
pages={607--617},
year={2021}
}
```
#### SIMPITIKI
```
@article{tonelli2016simpitiki,
title={SIMPITIKI: a Simplification corpus for Italian},
author={Tonelli, Sara and Aprosio, Alessio Palmero and Saltori, Francesca},
journal={Proceedings of CLiC-it},
year={2016}
}
```
#### Simple German
```
@inproceedings{battisti-etal-2020-corpus,
title = "A Corpus for Automatic Readability Assessment and Text Simplification of {G}erman",
author = {Battisti, Alessia and
Pf{\"u}tze, Dominik and
S{\"a}uberli, Andreas and
Kostrzewa, Marek and
Ebling, Sarah},
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.404",
pages = "3302--3311",
abstract = "In this paper, we present a corpus for use in automatic readability assessment and automatic text simplification for German, the first of its kind for this language. The corpus is compiled from web sources and consists of parallel as well as monolingual-only (simplified German) data amounting to approximately 6,200 documents (nearly 211,000 sentences). As a unique feature, the corpus contains information on text structure (e.g., paragraphs, lines), typography (e.g., font type, font style), and images (content, position, and dimensions). While the importance of considering such information in machine learning tasks involving simplified language, such as readability assessment, has repeatedly been stressed in the literature, we provide empirical evidence for its benefit. We also demonstrate the added value of leveraging monolingual-only data for automatic text simplification via machine translation through applying back-translation, a data augmentation technique.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
#### Simplext
```
@article{10.1145/2738046,
author = {Saggion, Horacio and \v{S}tajner, Sanja and Bott, Stefan and Mille, Simon and Rello, Luz and Drndarevic, Biljana},
title = {Making It Simplext: Implementation and Evaluation of a Text Simplification System for Spanish},
year = {2015},
issue_date = {June 2015}, publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {6},
number = {4},
issn = {1936-7228},
url = {https://doi.org/10.1145/2738046},
doi = {10.1145/2738046},
journal = {ACM Trans. Access. Comput.},
month = {may},
articleno = {14},
numpages = {36},
keywords = {Spanish, text simplification corpus, human evaluation, readability measures}
}
```
#### SimplifyUR
```
@inproceedings{qasmi-etal-2020-simplifyur,
title = "{S}implify{UR}: Unsupervised Lexical Text Simplification for {U}rdu",
author = "Qasmi, Namoos Hayat and
Zia, Haris Bin and
Athar, Awais and
Raza, Agha Ali",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.428",
pages = "3484--3489",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
#### SloTS
```
@misc{gorenc2022slovene,
title = {Slovene text simplification dataset {SloTS}},
author = {Gorenc, Sabina and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1682},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution 4.0 International ({CC} {BY} 4.0)},
issn = {2820-4042},
year = {2022}
}
```
#### Terence and Teacher
```
@inproceedings{brunato-etal-2015-design,
title = "Design and Annotation of the First {I}talian Corpus for Text Simplification",
author = "Brunato, Dominique and
Dell{'}Orletta, Felice and
Venturi, Giulia and
Montemagni, Simonetta",
booktitle = "Proceedings of the 9th Linguistic Annotation Workshop",
month = jun,
year = "2015",
address = "Denver, Colorado, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W15-1604",
doi = "10.3115/v1/W15-1604",
pages = "31--41",
}
```
#### TextComplexityDE
```
@article{naderi2019subjective,
title={Subjective Assessment of Text Complexity: A Dataset for German Language},
author={Naderi, Babak and Mohtaj, Salar and Ensikat, Kaspar and M{\"o}ller, Sebastian},
journal={arXiv preprint arXiv:1904.07733},
year={2019}
}
```
#### WikiAuto
```
@inproceedings{acl/JiangMLZX20,
author = {Chao Jiang and
Mounica Maddela and
Wuwei Lan and
Yang Zhong and
Wei Xu},
editor = {Dan Jurafsky and
Joyce Chai and
Natalie Schluter and
Joel R. Tetreault},
title = {Neural {CRF} Model for Sentence Alignment in Text Simplification},
booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational
Linguistics, {ACL} 2020, Online, July 5-10, 2020},
pages = {7943--7960},
publisher = {Association for Computational Linguistics},
year = {2020},
url = {https://www.aclweb.org/anthology/2020.acl-main.709/}
}
```
#### WikiLargeFR
```
@inproceedings{cardon-grabar-2020-french,
title = "{F}rench Biomedical Text Simplification: When Small and Precise Helps",
author = "Cardon, R{\'e}mi and
Grabar, Natalia",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.62",
doi = "10.18653/v1/2020.coling-main.62",
pages = "710--716",
abstract = "We present experiments on biomedical text simplification in French. We use two kinds of corpora {--} parallel sentences extracted from existing health comparable corpora in French and WikiLarge corpus translated from English to French {--} and a lexicon that associates medical terms with paraphrases. Then, we train neural models on these parallel corpora using different ratios of general and specialized sentences. We evaluate the results with BLEU, SARI and Kandel scores. The results point out that little specialized data helps significantly the simplification.",
}
```
## Data Availability
### Public Datasets
Most of the public datasets are available as part of this MultiSim repo. A few are still pending availability. For all resources, we provide alternative download links.
| Dataset | Language | Availability in MultiSim Repo | Alternative Link |
|---|---|---|---|
| ASSET | English | Available | https://huggingface.co/datasets/asset |
| WikiAuto | English | Available | https://huggingface.co/datasets/wiki_auto |
| CLEAR | French | Available | http://natalia.grabar.free.fr/resources.php#remi |
| WikiLargeFR | French | Available | http://natalia.grabar.free.fr/resources.php#remi |
| GEOLino | German | Available | https://github.com/Jmallins/ZEST-data |
| TextComplexityDE | German | Available | https://github.com/babaknaderi/TextComplexityDE |
| AdminIT | Italian | Available | https://github.com/Unipisa/admin-It |
| Simpitiki | Italian | Available | https://github.com/dhfbk/simpitiki# |
| PaCCSS-IT | Italian | Available | http://www.italianlp.it/resources/paccss-it-parallel-corpus-of-complex-simple-sentences-for-italian/ |
| Terence and Teacher | Italian | Available | http://www.italianlp.it/resources/terence-and-teacher/ |
| Easy Japanese | Japanese | Available | https://www.jnlp.org/GengoHouse/snow/t15 |
| Easy Japanese Extended | Japanese | Available | https://www.jnlp.org/GengoHouse/snow/t23 |
| RuAdapt Encyclopedia | Russian | Available | https://github.com/Digital-Pushkin-Lab/RuAdapt |
| RuAdapt Fairytales | Russian | Available | https://github.com/Digital-Pushkin-Lab/RuAdapt |
| RuSimpleSentEval | Russian | Available | https://github.com/dialogue-evaluation/RuSimpleSentEval |
| RuWikiLarge | Russian | Available | https://github.com/dialogue-evaluation/RuSimpleSentEval |
| SloTS | Slovene | Available | https://github.com/sabina-skubic/text-simplification-slovene |
| SimplifyUR | Urdu | Pending | https://github.com/harisbinzia/SimplifyUR |
| PorSimples | Brazilian Portuguese | Available | [sandra@icmc.usp.br](mailto:sandra@icmc.usp.br) |
### On Request Datasets
The authors of the original papers must be contacted for the on-request datasets. Contact information for the authors of each dataset is provided below.
| Dataset | Language | Contact |
|---|---|---|
| CBST | Basque | http://www.ixa.eus/node/13007?language=en <br/> [itziar.gonzalezd@ehu.eus](mailto:itziar.gonzalezd@ehu.eus) |
| DSim | Danish | [sk@eyejustread.com](mailto:sk@eyejustread.com) |
| Newsela EN | English | [https://newsela.com/data/](https://newsela.com/data/) |
| Newsela ES | Spanish | [https://newsela.com/data/](https://newsela.com/data/) |
| German News | German | [ebling@cl.uzh.ch](mailto:ebling@cl.uzh.ch) |
| Simple German | German | [ebling@cl.uzh.ch](mailto:ebling@cl.uzh.ch) |
| Simplext | Spanish | [horacio.saggion@upf.edu](mailto:horacio.saggion@upf.edu) |
| RuAdapt Literature | Russian | Partially Available: https://github.com/Digital-Pushkin-Lab/RuAdapt <br/> Full Dataset: [anna.dmitrieva@helsinki.fi](mailto:anna.dmitrieva@helsinki.fi) |
atmallen/popqa-parents-lying | 2023-07-19T15:57:51.000Z | [
"region:us"
] | atmallen | null | null | 0 | 8 | 2023-07-19T00:40:17 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: true_label
dtype: int64
splits:
- name: train
num_bytes: 3223356
num_examples: 31936
- name: validation
num_bytes: 695352
num_examples: 6848
- name: test
num_bytes: 700442
num_examples: 6880
download_size: 750525
dataset_size: 4619150
---
# Dataset Card for "popqa-parents-lying"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xzuyn/open-instruct-uncensored-alpaca | 2023-07-31T22:23:20.000Z | [
"size_categories:100K<n<1M",
"language:en",
"allenai",
"open-instruct",
"ehartford",
"alpaca",
"region:us"
] | xzuyn | null | null | 0 | 8 | 2023-07-20T21:36:52 | ---
language:
- en
tags:
- allenai
- open-instruct
- ehartford
- alpaca
size_categories:
- 100K<n<1M
---
[Original dataset page from ehartford.](https://huggingface.co/datasets/ehartford/open-instruct-uncensored)
810,102 entries. Sourced from `open-instruct-uncensored.jsonl`.
Converted the JSONL to a JSON file which can be loaded into something like LLaMa-LoRA-Tuner.
I've also included smaller datasets that include fewer entries, depending on how much memory you have to work with.
Each one is randomized before being converted, so the entry order of each dataset is unique.
```
Count of each Dataset:
code_alpaca: 19991
unnatural_instructions: 68231
baize: 166096
self_instruct: 81512
oasst1: 49433
flan_v2: 97519
stanford_alpaca: 50098
sharegpt: 46733
super_ni: 96157
dolly: 14624
cot: 73946
gpt4_alpaca: 45774
```
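A rough sketch of the conversion described above — JSONL in, shuffled JSON subsets out — might look like this (the file names and subset sizes are illustrative assumptions, not the exact script used):

```python
import json
import random

def jsonl_to_subsets(jsonl_path, out_prefix, subset_sizes, seed=0):
    """Read a .jsonl file, shuffle its entries, and write one .json file
    per requested subset size (None means all entries)."""
    with open(jsonl_path, encoding="utf-8") as f:
        entries = [json.loads(line) for line in f if line.strip()]
    random.Random(seed).shuffle(entries)  # each export gets a randomized order
    for size in subset_sizes:
        subset = entries if size is None else entries[:size]
        label = "all" if size is None else str(size)
        with open(f"{out_prefix}-{label}.json", "w", encoding="utf-8") as f:
            json.dump(subset, f, ensure_ascii=False, indent=2)

# e.g. jsonl_to_subsets("open-instruct-uncensored.jsonl", "open-instruct",
#                       [None, 100_000, 10_000])
```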
TrainingDataPro/makeup-detection-dataset | 2023-09-19T19:35:55.000Z | [
"task_categories:image-to-image",
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | The dataset consists of photos featuring the same individuals captured in two
distinct scenarios - *with and without makeup*. The dataset contains a diverse
range of individuals with various *ages, ethnicities and genders*. The images
themselves would be of high quality, ensuring clarity and detail for each
subject.
In photos with makeup, it is applied **to only specific parts** of the face,
such as *eyes, lips, or skin*.
In photos without makeup, individuals have a bare face with no visible
cosmetics or beauty enhancements. These images would provide a clear contrast
to the makeup images, allowing for significant visual analysis. | @InProceedings{huggingface:dataset,
title = {makeup-detection-dataset},
author = {TrainingDataPro},
year = {2023}
} | 2 | 8 | 2023-07-21T07:07:28 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
- image-classification
tags:
- code
dataset_info:
features:
- name: no_makeup
dtype: image
- name: with_makeup
dtype: image
- name: part
dtype: string
- name: gender
dtype: string
- name: age
dtype: int8
- name: country
dtype: string
splits:
- name: train
num_bytes: 25845965
num_examples: 26
download_size: 25248180
dataset_size: 25845965
---
# Makeup Detection Dataset
The dataset consists of photos featuring the same individuals captured in two distinct scenarios - *with and without makeup*. The dataset contains a diverse range of individuals with various *ages, ethnicities and genders*. The images themselves would be of high quality, ensuring clarity and detail for each subject.
In photos with makeup, it is applied **to only specific parts** of the face, such as *eyes, lips, or skin*.
In photos without makeup, individuals have a bare face with no visible cosmetics or beauty enhancements. These images would provide a clear contrast to the makeup images, allowing for significant visual analysis.
### The dataset's possible applications:
- facial recognition
- beauty consultations and personalized recommendations
- augmented reality and filters in photography apps
- social media and influencer marketing
- dermatology and skincare

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=makeup-detection-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
- **no_makeup**: includes images of people *without* makeup
- **with_makeup**: includes images of people *wearing makeup*. The people are the same as in the previous folder; photos are identified by the same name
- **.csv** file: contains information about people in the dataset
### File with the extension .csv
includes the following information for each set of media files:
- **no_makeup**: link to the photo of a person without makeup,
- **with_makeup**: link to the photo of the person with makeup,
- **part**: body part of makeup's application,
- **gender**: gender of the person,
- **age**: age of the person,
- **country**: country of the person
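A file with these columns can be read with the Python standard library; the file name below is an assumption, since the actual name isn't given:

```python
import csv

def load_annotations(csv_path):
    """Parse the annotation CSV into a list of dicts, casting age to int."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["age"] = int(row["age"])
    return rows

# pairs of (bare-face, makeup) image links per person, e.g.:
# pairs = [(r["no_makeup"], r["with_makeup"]) for r in load_annotations("makeup.csv")]
```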
# Images for makeup detection might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=makeup-detection-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
YoonSeul/legal_train_v1 | 2023-07-24T14:44:37.000Z | [
"region:us"
] | YoonSeul | null | null | 0 | 8 | 2023-07-24T14:44:31 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 33357358
num_examples: 14716
download_size: 15578888
dataset_size: 33357358
---
# Dataset Card for "legal_train_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HydraLM/math_dataset_alpaca | 2023-07-27T18:43:34.000Z | [
"region:us"
] | HydraLM | null | null | 0 | 8 | 2023-07-27T18:43:23 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 71896969
num_examples: 49999
download_size: 34712339
dataset_size: 71896969
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "math_dataset_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
HydraLM/GPTeacher_roleplay_standardized | 2023-07-27T20:03:23.000Z | [
"region:us"
] | HydraLM | null | null | 2 | 8 | 2023-07-27T20:03:21 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
splits:
- name: train
num_bytes: 1664691
num_examples: 5769
download_size: 946455
dataset_size: 1664691
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "GPTeacher_roleplay_standardized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kaenakiakona/spanglish_claude_generated | 2023-08-03T23:12:25.000Z | [
"region:us"
] | kaenakiakona | null | null | 0 | 8 | 2023-07-30T21:07:50 | Entry not found
DNW/newbury_opening_times_qa | 2023-07-31T12:57:27.000Z | [
"region:us"
] | DNW | null | null | 0 | 8 | 2023-07-31T12:57:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 31347
num_examples: 233
download_size: 8252
dataset_size: 31347
---
# Dataset Card for "newbury_opening_times_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
xzuyn/futurama-alpaca | 2023-08-03T06:49:53.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"region:us"
] | xzuyn | null | null | 0 | 8 | 2023-08-01T20:41:50 | ---
language:
- en
size_categories:
- n<1K
task_categories:
- text-generation
- conversational
---
[Original Dataset](https://www.kaggle.com/datasets/josephvm/futurama-seasons-16-transcripts?select=only_spoken_text.csv)
114 episodes. Formatting is a work in progress; as with LLaMA, each episode runs 4,000+ tokens.
I would like to augment the instructions, and possibly supply an episode summary as the input.
I also want to make a set that includes multiple TV shows. I'm just not sure how to reformat all of this into smaller chunks (such as 512 tokens) while each chunk still reads as an instruction yet keeps the context of the episode.
```
Instruction: `Generate an episode of Futurama.`
Input: `{Episode Name} - {Episode Synopsis}`
Output: `{Episode Dialog In Chat Format}`
```
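One naive way to split an episode into smaller training chunks while keeping the instruction and episode context is to repeat the instruction/input header in every chunk. A rough sketch using whitespace-separated words as a stand-in for tokens (a real tokenizer will count differently):

```python
def chunk_episode(instruction, episode_input, dialog, max_tokens=512):
    """Split an episode's dialog into chunks that each repeat the
    instruction/input header, so every chunk carries its own context."""
    header = f"{instruction}\n{episode_input}\n"
    # budget left for dialog words after the repeated header
    budget = max(max_tokens - len(header.split()), 1)
    words = dialog.split()
    return [
        header + " ".join(words[start:start + budget])
        for start in range(0, len(words), budget)
    ]
```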
iamshnoo/geomlama | 2023-09-15T23:24:53.000Z | [
"region:us"
] | iamshnoo | null | null | 0 | 8 | 2023-08-02T01:18:19 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: candidate_answers
dtype: string
- name: context
dtype: string
- name: country
dtype: string
splits:
- name: en
num_bytes: 17223
num_examples: 125
- name: fa
num_bytes: 24061
num_examples: 125
- name: hi
num_bytes: 34719
num_examples: 125
- name: sw
num_bytes: 17593
num_examples: 125
- name: zh
num_bytes: 15926
num_examples: 125
- name: el
num_bytes: 37639
num_examples: 150
download_size: 45285
dataset_size: 147161
---
data from the paper GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models
(along with some new data and modifications for cleaning)
[GitHub](https://github.com/WadeYin9712/GeoMLAMA)
# Dataset Card for "geomlama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
BigSuperbPrivate/SpokenTermDetection_Tedlium2Train | 2023-08-02T13:37:29.000Z | [
"region:us"
] | BigSuperbPrivate | null | null | 0 | 8 | 2023-08-02T13:06:04 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 15786905536.68
num_examples: 92967
- name: validation
num_bytes: 117079048.0
num_examples: 507
download_size: 15262598420
dataset_size: 15903984584.68
---
# Dataset Card for "SpokenTermDetection_Tedlium2Train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pourmand1376/isna-news | 2023-08-19T11:56:01.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:fa",
"license:apache-2.0",
"region:us"
] | pourmand1376 | null | null | 0 | 8 | 2023-08-02T14:30:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: TEXT
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 8078800930
num_examples: 2104859
download_size: 2743795907
dataset_size: 8078800930
license: apache-2.0
task_categories:
- text-generation
language:
- fa
pretty_name: Isna News
size_categories:
- 1M<n<10M
---
# Dataset Card for "isna-news"
This is a converted version of [Isna-news](https://www.kaggle.com/datasets/amirpourmand/isna-news) that complies with Open-Assistant standards.
The METADATA column contains:
- title
- link: short link to the news item
- language: fa
- jalali-time: time in the Jalali calendar
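Assuming the METADATA column is stored as a JSON-serialized string (an assumption about the encoding, not something stated above), its fields can be pulled out like this:

```python
import json

def parse_metadata(metadata_str):
    """Decode one METADATA cell into its documented fields."""
    meta = json.loads(metadata_str)
    return {
        "title": meta.get("title"),
        "link": meta.get("link"),
        "language": meta.get("language"),
        "jalali_time": meta.get("jalali-time"),
    }
```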
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/sub_ATC | 2023-08-06T05:38:09.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 8 | 2023-08-04T19:13:17 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 136737944.06422067
num_examples: 1000
- name: test
num_bytes: 13673794.406422066
num_examples: 100
download_size: 12473551
dataset_size: 150411738.47064275
---
# Dataset Card for "sub_ATC"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
diffusers/instructpix2pix-clip-filtered-upscaled | 2023-08-07T04:28:55.000Z | [
"region:us"
] | diffusers | null | null | 1 | 8 | 2023-08-07T03:02:48 | Entry not found
hakatashi/hakatashi-pixiv-bookmark-deepdanbooru | 2023-08-07T05:38:17.000Z | [
"task_categories:image-classification",
"task_categories:tabular-classification",
"size_categories:100K<n<1M",
"art",
"region:us"
] | hakatashi | null | null | 2 | 8 | 2023-08-07T03:54:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: tag_probs
sequence: float32
- name: class
dtype:
class_label:
names:
'0': not_bookmarked
'1': bookmarked_public
'2': bookmarked_private
splits:
- name: train
num_bytes: 4301053452
num_examples: 179121
- name: test
num_bytes: 1433684484
num_examples: 59707
- name: validation
num_bytes: 1433708496
num_examples: 59708
download_size: 7351682183
dataset_size: 7168446432
task_categories:
- image-classification
- tabular-classification
tags:
- art
size_categories:
- 100K<n<1M
---
A dataset for training a model that classifies pixiv artworks according to my preference.
## Schema
* tag_probs: List of probabilities for each tag. Preprocessed by [RF5/danbooru-pretrained](https://github.com/RF5/danbooru-pretrained) model. The index of each probability corresponds to the index of the tag in the [class_names_6000.json](https://github.com/RF5/danbooru-pretrained/blob/master/config/class_names_6000.json) file.
* class:
* not_bookmarked (0): Generated from images randomly-sampled from [animelover/danbooru2022](https://huggingface.co/datasets/animelover/danbooru2022) dataset. The images are filtered in advance to the post with pixiv source.
* bookmarked_public (1): Generated from publicly bookmarked images of [hakatashi](https://twitter.com/hakatashi).
* bookmarked_private (2): Generated from privately bookmarked images of [hakatashi](https://twitter.com/hakatashi).
## Stats
train:test:validation = 6:2:2
* not_bookmarked (0): 202,290 images
* bookmarked_public (1): 73,587 images
* bookmarked_private (2): 22,659 images
## Usage
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("hakatashi/hakatashi-pixiv-bookmark-deepdanbooru")
>>> dataset
DatasetDict({
test: Dataset({
features: ['tag_probs', 'class'],
num_rows: 59707
})
train: Dataset({
features: ['tag_probs', 'class'],
num_rows: 179121
})
validation: Dataset({
features: ['tag_probs', 'class'],
num_rows: 59708
})
})
>>> dataset['train'].features
{'tag_probs': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None),
'class': ClassLabel(names=['not_bookmarked', 'bookmarked_public', 'bookmarked_private'], id=None)}
```
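Given the class imbalance in the stats above (202,290 / 73,587 / 22,659), one common mitigation is to weight the training loss by inverse class frequency. A minimal sketch of the weight computation (the mitigation itself is a suggestion, not part of the dataset):

```python
def inverse_frequency_weights(counts):
    """Return per-class weights proportional to 1/frequency,
    normalized so the weights average to 1.0."""
    total = sum(counts.values())
    raw = {c: total / n for c, n in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {c: w / mean for c, w in raw.items()}

weights = inverse_frequency_weights({
    "not_bookmarked": 202_290,
    "bookmarked_public": 73_587,
    "bookmarked_private": 22_659,
})
# rarer classes get proportionally larger weights
```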
Fredithefish/openassistant-guanaco-unfiltered | 2023-08-27T21:08:58.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"language:de",
"language:fr",
"language:es",
"license:apache-2.0",
"region:us"
] | Fredithefish | null | null | 5 | 8 | 2023-08-12T10:12:28 | ---
license: apache-2.0
task_categories:
- conversational
language:
- en
- de
- fr
- es
size_categories:
- 1K<n<10K
---
# Guanaco-Unfiltered
- Any language other than English, German, French, or Spanish has been removed.
- Refusals of assistance have been removed.
- The identification as OpenAssistant has been removed.
## [Version 2 is out](https://huggingface.co/datasets/Fredithefish/openassistant-guanaco-unfiltered/blob/main/guanaco-unfiltered-v2.jsonl)
- Identification as OpenAssistant is now fully removed
- other improvements
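The filtering described above might be approximated with a simple phrase blocklist; the phrases below are illustrative assumptions, not the actual list used:

```python
BLOCKED_PHRASES = [
    "as an ai language model",
    "i cannot assist",
    "open assistant",
    "openassistant",
]

def keep_conversation(text):
    """Return False if the conversation contains any blocked phrase
    (refusals or assistant self-identification)."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

# filtered = [c for c in conversations if keep_conversation(c)]
```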
FreedomIntelligence/sharegpt-korean | 2023-08-13T16:46:20.000Z | [
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | 0 | 8 | 2023-08-13T16:41:43 | ---
license: apache-2.0
---
Korean ShareGPT data translated by gpt-3.5-turbo.
The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT).
TrainingDataPro/cows-detection-dataset | 2023-09-14T16:32:30.000Z | [
"task_categories:image-to-image",
"task_categories:image-classification",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"biology",
"code",
"region:us"
] | TrainingDataPro | The dataset is a collection of images along with corresponding bounding box annotations
that are specifically curated for **detecting cows** in images. The dataset covers
different *cow breeds, sizes, and orientations*, providing a comprehensive
representation of cow appearances and positions. Additionally, the visibility of each
cow is presented in the .xml file.
The cow detection dataset provides a valuable resource for researchers working on
detection tasks. It offers a diverse collection of annotated images, allowing for
comprehensive algorithm development, evaluation, and benchmarking, ultimately aiding
in the development of accurate and robust models. | @InProceedings{huggingface:dataset,
title = {cows-detection-dataset},
author = {TrainingDataPro},
year = {2023}
} | 1 | 8 | 2023-08-14T17:00:36 | ---
language:
- en
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
- image-classification
- object-detection
tags:
- biology
- code
dataset_info:
features:
- name: id
dtype: int32
- name: image
dtype: image
- name: mask
dtype: image
- name: bboxes
dtype: string
splits:
- name: train
num_bytes: 184108240
num_examples: 51
download_size: 183666433
dataset_size: 184108240
---
# Cows Detection Dataset
The dataset is a collection of images along with corresponding bounding box annotations that are specifically curated for **detecting cows** in images. The dataset covers different *cow breeds, sizes, and orientations*, providing a comprehensive representation of cow appearances and positions. Additionally, the visibility of each cow is presented in the .xml file.
The cow detection dataset provides a valuable resource for researchers working on detection tasks. It offers a diverse collection of annotated images, allowing for comprehensive algorithm development, evaluation, and benchmarking, ultimately aiding in the development of accurate and robust models.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cows-detection-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains of original images of cows
- **boxes** - includes bounding box labeling for the original images
- **annotations.xml** - contains coordinates of the bounding boxes and labels, created for the original photo
# Data Format
Each image from the `images` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for cow detection. For each point, the x and y coordinates are provided. Visibility of the cow is also provided by the label **is_visible** (true, false).
# Example of XML file structure
.png?generation=1692032268744062&alt=media)
# Images for cow detection might be collected in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=cows-detection-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
Sylvana/qa_en_translation | 2023-08-18T07:51:14.000Z | [
"task_categories:translation",
"size_categories:1K<n<10K",
"language:ar",
"license:apache-2.0",
"region:us"
] | Sylvana | null | null | 1 | 8 | 2023-08-17T17:38:33 | ---
license: apache-2.0
task_categories:
- translation
language:
- ar
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
highnote/pubmed_qa | 2023-08-19T13:28:27.000Z | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<... | highnote | PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative
statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances.
Each PubMedQA instance is composed of (1) a question which is either an existing research article
title or derived from one, (2) a context which is the corresponding abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question,
and (4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their
quantitative contents, is required to answer the questions. | @inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
} | 1 | 8 | 2023-08-19T13:28:27 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: pubmedqa
pretty_name: PubMedQA
dataset_info:
- config_name: pqa_labeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: reasoning_required_pred
dtype: string
- name: reasoning_free_pred
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 2089200
num_examples: 1000
download_size: 687882700
dataset_size: 2089200
- config_name: pqa_unlabeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
splits:
- name: train
num_bytes: 125938502
num_examples: 61249
download_size: 687882700
dataset_size: 125938502
- config_name: pqa_artificial
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 443554667
num_examples: 211269
download_size: 687882700
dataset_size: 443554667
config_names:
- pqa_artificial
- pqa_labeled
- pqa_unlabeled
duplicated_from: pubmed_qa
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PubMedQA homepage](https://pubmedqa.github.io/)
- **Repository:** [PubMedQA repository](https://github.com/pubmedqa/pubmedqa)
- **Paper:** [PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)
- **Leaderboard:** [PubMedQA leaderboard](https://pubmedqa.github.io/)
### Dataset Summary
[More Information Needed]
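As a minimal sketch of the `pqa_labeled` record layout described in the YAML features above (the sample record below is invented for illustration; real records come from the PubMedQA loader), note that `context` is a sequence of parallel fields, which materializes as a dict of aligned lists:

```python
# Invented sample record following the pqa_labeled schema above.
sample = {
    "pubid": 12345,
    "question": "Do preoperative statins reduce postoperative atrial fibrillation?",
    "context": {
        "contexts": ["Background passage.", "Methods passage."],
        "labels": ["BACKGROUND", "METHODS"],
        "meshes": ["Atrial Fibrillation", "Statins"],
    },
    "long_answer": "Statin use was associated with a reduced incidence.",
    "final_decision": "yes",
}

def flatten_context(record):
    """Pair each context passage with its section label."""
    ctx = record["context"]
    return list(zip(ctx["labels"], ctx["contexts"]))

print(flatten_context(sample))
```

The `final_decision` field takes one of three values (yes/no/maybe), which is what the multiple-choice QA task predicts.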
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. | 4,617 | [
…(embedding vector truncated in source)… ] |
yardeny/processed_gpt2_context_len_512 | 2023-08-21T08:53:12.000Z | [
"region:us"
] | yardeny | null | null | 0 | 8 | 2023-08-21T05:49:19 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 15593335128.0
num_examples: 6072171
download_size: 6562663671
dataset_size: 15593335128.0
---
# Dataset Card for "processed_gpt2_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 433 | [
…(embedding vector truncated in source)… ] |
zake7749/chinese-speech-corpus | 2023-08-30T16:19:14.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:zh",
"license:cc",
"region:us"
] | zake7749 | null | null | 0 | 8 | 2023-08-21T09:33:09 | ---
language:
- zh
license: cc
size_categories:
- 1K<n<10K
task_categories:
- conversational
dataset_info:
features:
- name: sentences
list:
- name: speaker
dtype: string
- name: speech
dtype: string
- name: source_url
dtype: string
splits:
- name: train
num_bytes: 77964319
num_examples: 1739
download_size: 43895652
dataset_size: 77964319
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
pretty_name: Chinese Speech Corpus
---
# Chinese Speech Corpus
This dataset is sourced from [SayIt](https://sayit.pdis.nat.gov.tw/), a website dedicated to preserving transcripts and meeting notes. It currently comprises 1,739 dialogues containing approximately 340,000 sentences, each annotated with its speaker.
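As a small illustration of the schema above (the turns below are invented), each record stores its dialogue as a list of speaker/speech pairs plus the source URL of the transcript:

```python
# Invented sample record following the schema above: a list of
# speaker/speech turns plus the source URL of the transcript.
record = {
    "sentences": [
        {"speaker": "主席", "speech": "會議開始。"},
        {"speaker": "委員", "speech": "謝謝主席。"},
    ],
    "source_url": "https://sayit.pdis.nat.gov.tw/",
}

# Flatten the dialogue into (speaker, speech) tuples.
transcript = [(s["speaker"], s["speech"]) for s in record["sentences"]]
print(transcript)
```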
## License
[CC0 License](https://creativecommons.org/share-your-work/public-domain/cc0/) | 904 | [
…(embedding vector truncated in source)… ] |
fia24/annotated18k_training_dataset_90 | 2023-08-23T10:18:59.000Z | [
"region:us"
] | fia24 | null | null | 0 | 8 | 2023-08-23T10:18:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: translation
struct:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 905284
num_examples: 15273
- name: test
num_bytes: 101954
num_examples: 1697
download_size: 538353
dataset_size: 1007238
---
# Dataset Card for "annotated18k_training_dataset_90"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 659 | [
…(embedding vector truncated in source)… ] |
corbt/all-recipes | 2023-08-24T06:27:02.000Z | [
"region:us"
] | corbt | null | null | 24 | 8 | 2023-08-24T06:26:30 | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 1569011376
num_examples: 2147248
download_size: 807147913
dataset_size: 1569011376
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "all-recipes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 451 | [
…(embedding vector truncated in source)… ] |
qgyd2021/e_commerce_customer_service | 2023-09-14T01:33:20.000Z | [
"task_categories:text-retrieval",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"e-commerce",
"region:us"
] | qgyd2021 | null | @dataset{e_commerce_customer_service,
author = {Xing Tian},
title = {e_commerce_customer_service},
month = aug,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | 0 | 8 | 2023-08-24T09:52:45 | ---
task_categories:
- text-retrieval
- question-answering
language:
- en
tags:
- e-commerce
size_categories:
- 1M<n<10M
---
## E-commerce Customer Service Dataset
E-commerce data collected from the [lightinthebox](https://www.lightinthebox.com/) website. This data can be used for research on e-commerce customer-service chatbots.
Data contents:
faq.json: question-answer pairs for common questions.
product.jsonl: product information.
The `examples` directory contains the crawler code used to collect the product information.
python==3.8.10
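A minimal sketch of reading a JSON-lines file such as `product.jsonl` (the field names in the sample record are invented; the actual fields are not documented here):

```python
import io
import json

def read_jsonl(fp):
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in fp if line.strip()]

# In-memory stand-in for product.jsonl; open a real file the same way.
sample = io.StringIO('{"name": "desk lamp", "price": 19.99}\n')
products = read_jsonl(sample)
print(products)
```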
| 318 | [
…(embedding vector truncated in source)… ] |
Linhz/qg_vinewsqa | 2023-08-24T16:23:00.000Z | [
"region:us"
] | Linhz | null | null | 0 | 8 | 2023-08-24T16:21:41 | Entry not found | 15 | [
…(embedding vector truncated in source)… ] |
quocanh34/fake_dataset | 2023-08-28T07:57:55.000Z | [
"region:us"
] | quocanh34 | null | null | 0 | 8 | 2023-08-28T07:57:53 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: intent
dtype: string
- name: sentence_annotation
dtype: string
- name: entities
list:
- name: type
dtype: string
- name: filler
dtype: string
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 2213835
num_examples: 5
download_size: 614583
dataset_size: 2213835
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fake_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 844 | [
…(embedding vector truncated in source)… ] |
merkol/ffhq-256 | 2023-08-28T11:26:44.000Z | [
"region:us"
] | merkol | null | null | 0 | 8 | 2023-08-28T11:17:10 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 7358464050.0
num_examples: 70000
download_size: 7407340570
dataset_size: 7358464050.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ffhq-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 450 | [
…(embedding vector truncated in source)… ] |
922-Narra/lt_08312023_test_5j1 | 2023-09-02T09:30:34.000Z | [
"license:cc0-1.0",
"region:us"
] | 922-Narra | null | null | 0 | 8 | 2023-08-31T19:18:53 | ---
license: cc0-1.0
---
# LM Tagalog 08/31/2023 Test 5 (jsonl format, split):
An experimental Tagalog-focused dataset, based on a subset of [Tagalog sentences from this dataset](https://huggingface.co/datasets/jfernandez/cebuano-filipino-sentences) and augmented with base LLaMA-2 13B (q4_1 GGML) to form a rudimentary, mostly three-turn dialogue dataset.
Used for:
* [Taga-llama-v0.3](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3)
* [Taga-llama-v0.3a](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.3a)
We make this dataset public for transparency and to show the mostly Tagalog generations used to create it (acknowledging their lack of coherence or direction, while noting the remarkable attempts of a primarily English-pretrained base model to generate mostly in Tagalog). Further refinements are planned (e.g., manual editing for safety, alignment, and coherence; reducing Taglish; likely regenerating at a higher quantization). | 971 | [
…(embedding vector truncated in source)… ] |
factored/saleswiz_is_positive | 2023-09-14T20:01:28.000Z | [
"region:us"
] | factored | null | null | 0 | 8 | 2023-09-01T19:00:30 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 187533.28961748633
num_examples: 640
- name: validation
num_bytes: 80580.71038251366
num_examples: 275
download_size: 178227
dataset_size: 268114.0
---
# Dataset Card for "saleswiz_is_positive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [
…(embedding vector truncated in source)… ] |
hjerpe/github-kubeflow-issues | 2023-09-13T18:41:35.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:topic-classification",
"task_ids:intent-classification",
"task_ids:multi-label-classification",
"task_ids:multi-class-classification",
"annotations_creators:no-annota... | hjerpe | null | null | 0 | 8 | 2023-09-03T06:17:54 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- en
license: []
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- topic-classification
- intent-classification
- multi-label-classification
- multi-class-classification
pretty_name: github-kubeflow-pipelines-issues
tags:
- GitHub-Issues
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: timestamp[s]
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 9230693
num_examples: 1567
download_size: 0
dataset_size: 9230693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
## Dataset Description
- **Point of Contact:** [Adam Hjerpe](hjerpeadam5@gmail.com)
### Dataset Summary
GitHub Issues is a dataset consisting of the top 5,000 GitHub issues, as of 2023-09-02, associated with the Kubeflow Pipelines [repository](https://github.com/kubeflow/pipelines). It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the Kubeflow Pipelines project.
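As an illustrative sketch of working with these records (the three rows below are invented, but the field names `is_pull_request` and `labels[].name` follow the schema above): GitHub's issues API returns pull requests mixed in with issues, so a common first step is to separate them and tally the label names.

```python
from collections import Counter

# Invented sample rows using the schema fields from this dataset.
rows = [
    {"title": "Pipeline step fails on retry", "is_pull_request": False,
     "labels": [{"name": "kind/bug"}]},
    {"title": "Fix retry handler", "is_pull_request": True,
     "labels": [{"name": "kind/bug"}]},
    {"title": "Document artifact caching", "is_pull_request": False,
     "labels": [{"name": "kind/documentation"}]},
]

# Keep true issues only, then count their label names.
issues = [r for r in rows if not r["is_pull_request"]]
label_counts = Counter(l["name"] for r in issues for l in r["labels"])
print(label_counts)
```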
### Languages
The issues are written in English, using language common in software development.
### Contributions
Thanks to [@hjerpe](https://github.com/hjerpe) for adding this dataset. | 7,247 | [
…(embedding vector truncated in source)… ] |
aboix/GB_EXAMPLE_V1_GROUPED1_DOWNSAMPLED_SIMPLE | 2023-09-05T11:22:56.000Z | [
"region:us"
] | aboix | null | null | 0 | 8 | 2023-09-05T11:21:50 | ---
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
sequence: string
- name: annotation_agent
dtype: string
- name: vectors
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 6151886.959430213
num_examples: 20330
- name: test
num_bytes: 1538123.0405697871
num_examples: 5083
download_size: 4293230
dataset_size: 7690010.0
---
# Dataset Card for "GB_EXAMPLE_V1_GROUPED1_DOWNSAMPLED_SIMPLE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,078 | [
…(embedding vector truncated in source)… ] |
yzhuang/autotree_automl_10000_pol_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T10:52:37.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 8 | 2023-09-07T10:52:32 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 50839806
dataset_size: 472880000
---
# Dataset Card for "autotree_automl_10000_pol_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 840 | [
…(embedding vector truncated in source)… ] |
yzhuang/autotree_pmlb_10000_ring_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T12:31:47.000Z | [
"region:us"
] | yzhuang | null | null | 0 | 8 | 2023-09-07T12:31:39 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 236440000
num_examples: 10000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 120439468
dataset_size: 472880000
---
# Dataset Card for "autotree_pmlb_10000_ring_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 840 | [
…(embedding vector truncated in source)… ] |
benjis/sven | 2023-09-18T18:25:16.000Z | [
"arxiv:2302.05319",
"region:us"
] | benjis | null | null | 0 | 8 | 2023-09-13T02:27:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: func_name
dtype: string
- name: func_src_before
dtype: string
- name: func_src_after
dtype: string
- name: line_changes
struct:
- name: deleted
list:
- name: line_no
dtype: int64
- name: char_start
dtype: int64
- name: char_end
dtype: int64
- name: line
dtype: string
- name: added
list:
- name: line_no
dtype: int64
- name: char_start
dtype: int64
- name: char_end
dtype: int64
- name: line
dtype: string
- name: char_changes
struct:
- name: deleted
list:
- name: char_start
dtype: int64
- name: char_end
dtype: int64
- name: chars
dtype: string
- name: added
list:
- name: char_start
dtype: int64
- name: char_end
dtype: int64
- name: chars
dtype: string
- name: commit_link
dtype: string
- name: file_name
dtype: string
- name: vul_type
dtype: string
splits:
- name: train
num_bytes: 4961153
num_examples: 720
- name: val
num_bytes: 621398
num_examples: 83
download_size: 2246744
dataset_size: 5582551
---
# Dataset Card for "sven"
Unofficial, not affiliated with the authors.
Paper: https://arxiv.org/abs/2302.05319
Repository: https://github.com/eth-sri/sven
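A hedged consistency-check sketch for the `char_changes` record described in the schema above (not code from the sven repository; the offset convention — `deleted` spans index into `func_src_before`, `added` spans into `func_src_after` — is an assumption, and the sample fix below is invented):

```python
def spans_consistent(before: str, after: str, char_changes: dict) -> bool:
    """Check that each recorded span matches the text it points into."""
    for d in char_changes.get("deleted", []):
        if before[d["char_start"]:d["char_end"]] != d["chars"]:
            return False
    for a in char_changes.get("added", []):
        if after[a["char_start"]:a["char_end"]] != a["chars"]:
            return False
    return True

# Invented vulnerability fix: unbounded strcpy -> bounded strncpy.
before = "strcpy(dst, src);"
after = "strncpy(dst, src, n);"
changes = {
    "deleted": [{"char_start": 0, "char_end": 6, "chars": "strcpy"}],
    "added": [
        {"char_start": 0, "char_end": 7, "chars": "strncpy"},
        {"char_start": 16, "char_end": 19, "chars": ", n"},
    ],
}
print(spans_consistent(before, after, changes))
```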
| 1,544 | [
…(embedding vector truncated in source)… ] |
HydraLM/SkunkData-Corpus-Clusters-001 | 2023-09-15T04:03:06.000Z | [
"region:us"
] | HydraLM | null | null | 0 | 8 | 2023-09-15T03:54:32 | ---
configs:
- config_name: default
data_files:
- split: config0
path: data/config0-*
- split: config1
path: data/config1-*
- split: config2
path: data/config2-*
- split: config3
path: data/config3-*
- split: config4
path: data/config4-*
- split: config5
path: data/config5-*
- split: config6
path: data/config6-*
- split: config7
path: data/config7-*
- split: config8
path: data/config8-*
- split: config9
path: data/config9-*
- split: config10
path: data/config10-*
- split: config11
path: data/config11-*
- split: config12
path: data/config12-*
- split: config13
path: data/config13-*
- split: config14
path: data/config14-*
- split: config15
path: data/config15-*
- split: config16
path: data/config16-*
- split: config17
path: data/config17-*
- split: config18
path: data/config18-*
- split: config19
path: data/config19-*
- split: config20
path: data/config20-*
- split: config21
path: data/config21-*
- split: config22
path: data/config22-*
- split: config23
path: data/config23-*
- split: config24
path: data/config24-*
- split: config25
path: data/config25-*
- split: config26
path: data/config26-*
- split: config27
path: data/config27-*
- split: config28
path: data/config28-*
- split: config29
path: data/config29-*
- split: config30
path: data/config30-*
- split: config31
path: data/config31-*
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: config0
num_bytes: 87924284
num_examples: 99425
- name: config1
num_bytes: 106611220
num_examples: 125333
- name: config2
num_bytes: 173980413
num_examples: 142226
- name: config3
num_bytes: 66985706
num_examples: 95365
- name: config4
num_bytes: 159352232
num_examples: 160680
- name: config5
num_bytes: 77667739
num_examples: 73168
- name: config6
num_bytes: 49793674
num_examples: 68399
- name: config7
num_bytes: 110741148
num_examples: 96048
- name: config8
num_bytes: 246980215
num_examples: 189712
- name: config9
num_bytes: 78705055
num_examples: 75952
- name: config10
num_bytes: 24590140
num_examples: 49477
- name: config11
num_bytes: 101881388
num_examples: 100613
- name: config12
num_bytes: 171141731
num_examples: 189285
- name: config13
num_bytes: 94659874
num_examples: 172001
- name: config14
num_bytes: 75441820
num_examples: 116020
- name: config15
num_bytes: 109459044
num_examples: 109363
- name: config16
num_bytes: 124294700
num_examples: 110020
- name: config17
num_bytes: 68624316
num_examples: 84195
- name: config18
num_bytes: 194234867
num_examples: 126148
- name: config19
num_bytes: 38810678
num_examples: 44274
- name: config20
num_bytes: 109428694
num_examples: 92432
- name: config21
num_bytes: 127536760
num_examples: 91186
- name: config22
num_bytes: 75102071
num_examples: 93171
- name: config23
num_bytes: 99290404
num_examples: 73458
- name: config24
num_bytes: 83858017
num_examples: 95037
- name: config25
num_bytes: 153137616
num_examples: 118558
- name: config26
num_bytes: 84263186
num_examples: 147431
- name: config27
num_bytes: 32127511
num_examples: 61803
- name: config28
num_bytes: 79484162
num_examples: 100282
- name: config29
num_bytes: 50017006
num_examples: 87382
- name: config30
num_bytes: 17400390
num_examples: 31757
- name: config31
num_bytes: 35728713
num_examples: 58432
download_size: 0
dataset_size: 3109254774
---
# Dataset Card for "SkunkData-Corpus-Clusters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 4,258 | [
…(embedding vector truncated in source)… ] |
NASP/neteval-exam | 2023-09-22T02:56:47.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:multiple-choice",
"size_categories:10K<n<100K",
"language:en",
"language:zh",
"license:cc-by-nc-sa-4.0",
"arxiv:2309.05557",
"region:us"
] | NASP | null | null | 2 | 8 | 2023-09-16T01:55:01 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
- question-answering
- multiple-choice
language:
- en
- zh
pretty_name: Netops
size_categories:
- 10K<n<100K
---
NetEval is a NetOps evaluation suite for foundation models, consisting of 5,269 multiple-choice questions. Please check [our paper](https://arxiv.org/abs/2309.05557) for more details about NetEval.
We hope NetEval could help developers track the progress and analyze the NetOps ability of their models.
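As a simple illustration of how a multiple-choice benchmark such as this is typically scored (the answer keys below are invented), the standard metric is exact-match accuracy:

```python
# Invented gold/predicted answer keys, for illustration only.
gold = ["A", "C", "B", "D"]
pred = ["A", "C", "D", "D"]

# Exact-match accuracy over the question set.
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(f"accuracy = {accuracy:.2f}")
```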
## Citation
Please cite our paper if you use our dataset.
```
@misc{miao2023empirical,
title={An Empirical Study of NetOps Capability of Pre-Trained Large Language Models},
author={Yukai Miao and Yu Bai and Li Chen and Dan Li and Haifeng Sun and Xizheng Wang and Ziqiu Luo and Dapeng Sun and Xiuting Xu and Qi Zhang and Chao Xiang and Xinchi Li},
year={2023},
eprint={2309.05557},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 951 | [
…(embedding vector truncated in source)… ] |
minh21/COVID-QA-unique-context-test-10-percent-validation-10-percent | 2023-09-17T18:29:42.000Z | [
"region:us"
] | minh21 | null | null | 0 | 8 | 2023-09-17T11:11:59 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: is_impossible
dtype: bool
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 2050073
num_examples: 1615
- name: test
num_bytes: 260386
num_examples: 202
- name: validation
num_bytes: 261992
num_examples: 202
download_size: 0
dataset_size: 2572451
---
# Dataset Card for "COVID-QA-unique-context-test-10-percent-validation-10-percent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 750 | [
…(embedding vector truncated in source)… ] |
zqz979/meta-review | 2023-10-15T02:52:16.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | zqz979 | null | null | 2 | 8 | 2023-09-21T19:57:48 | ---
task_categories:
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for Meta-Review
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Meta-Review dataset was created from the ORSUM dataset proposed in the paper "Meta-review Generation with Checklist-guided Iterative Introspection" by Zeng et al. It was downloaded from the authors' official GitHub repository: https://github.com/Mankeerat/orsum-meta-review-generation
### Supported Tasks and Leaderboards
Multi-Document Summarization
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,653 |
thanhduycao/data_synthesis | 2023-09-22T01:37:35.000Z | [
"region:us"
] | thanhduycao | null | null | 0 | 8 | 2023-09-22T01:36:57 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: transcription
dtype: string
- name: old_transcription
dtype: string
splits:
- name: train
num_bytes: 2364881573
num_examples: 4430
download_size: 559968141
dataset_size: 2364881573
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data_synthesis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 657 |
jkv53/13F_Reports_with_labels | 2023-09-22T15:01:37.000Z | [
"region:us"
] | jkv53 | null | null | 0 | 8 | 2023-09-22T15:01:34 | ---
dataset_info:
features:
- name: title
dtype: string
- name: body
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12642773
num_examples: 1113
download_size: 3334911
dataset_size: 12642773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "13F_Reports_with_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 521 |
CollectiveCognition/chats-data-2023-09-22 | 2023-09-23T02:07:18.000Z | [
"license:mit",
"region:us"
] | CollectiveCognition | null | null | 8 | 8 | 2023-09-23T01:40:24 | ---
license: mit
---
# Dataset Card for "Collective Cognition ChatGPT Conversations"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
### Dataset Summary
The "Collective Cognition ChatGPT Conversations" dataset is a collection of chat logs between users and the ChatGPT model. These conversations have been shared by users on the "Collective Cognition" website. The dataset provides insights into user interactions with language models and can be utilized for multiple purposes, including training, research, and analysis.
Visit Collective Cognition to browse or contribute chats here: https://collectivecognition.ai
### Supported Tasks
- **Language Model Finetuning**: Training through behavior cloning and knowledge instilling.
- **Dialogue Understanding**: Analyzing the flow and structure of conversations.
- **Sentiment Analysis**: Understanding the sentiment or emotion in the dialogues.
- **Topic Modeling**: Determining the topics discussed within the chats.
### Languages
The primary language of the dataset is English, but chats in any language may appear as users contribute more conversations.
## Dataset Structure
### Data Instances
A typical data instance includes a chat log with a title, the model name, whether the chat used custom instructions (currently not included if so), and the content of each message along with the role of the sender (either "human" or "gpt").
The dataset is formatted to conform to the ShareGPT standard.
This is an example entry in the dataset, noting that turns are variable and can be many or as few as two:
```json
{
"title": "Interest Formulas Clarification",
"model_name": "GPT-4",
"custom_instruction": false,
"conversations": [
{
"from": "human",
"value": "Is this correct? The formula for calculating simple interest (also known as Compound Interest) is given by I = P * r^n, where \n- I represents the final amount of money, also called principal or capital. This value will keep on increasing at a fixed rate per unit time denoted by 'r' and number of times denoted by 'n'. The initial investment represented by 'P', while the percentage rate expressed as an annual fraction, i.e., p/100 representing the percentage to be paid annually. In other words it\u2019s how much you earn in one year out of your original $100 invested with a 5% annual return compounded monthly."
},
{
"from": "assistant",
"value": "I'm sorry, but there seems to be some confusion in your question. The formula given and the definitions of the variables seem to be mixed up between simple interest and compound interest. Let me clarify:\n\n1. Simple Interest: This type of interest is calculated only on the original amount (principal) that you invested or loaned. The formula for simple interest is I = PRT, where:\n - I is the interest\n - P is the principal amount (initial investment)\n - R is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - T is the time the money is invested or borrowed for, in years.\n\n2. Compound Interest: This type of interest is calculated on the initial principal and also on the accumulated interest of previous periods of a deposit or loan. The formula for compound interest is A = P(1 + r/n)^(nt), where:\n - A is the amount of money accumulated after n years, including interest.\n - P is the principal amount (the initial amount of money)\n - r is the annual interest rate (in decimal form, so a 5% rate would be 0.05)\n - n is the number of times that interest is compounded per year\n - t is the number of years the money is invested or borrowed for.\n\nIn both cases, the interest rate is usually expressed as an annual rate, regardless of the length of the term."
}
]
}
```
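A record in this shape can be flattened into plain (role, text) turns with a few lines of Python. The following is a minimal sketch; the record below is a trimmed, hypothetical stand-in for the entry above:

```python
# Trimmed, hypothetical stand-in for a dataset record like the one above.
record = {
    "title": "Interest Formulas Clarification",
    "model_name": "GPT-4",
    "custom_instruction": False,
    "conversations": [
        {"from": "human", "value": "Is this correct? The formula for simple interest..."},
        {"from": "assistant", "value": "There seems to be some confusion in your question..."},
    ],
}

def to_turns(record):
    """Flatten a ShareGPT-style record into (role, text) pairs, one per turn."""
    return [(turn["from"], turn["value"]) for turn in record["conversations"]]

for role, text in to_turns(record):
    print(f"{role}: {text[:40]}")
```

Because turns are variable in number, downstream code should iterate over `conversations` rather than assume a fixed human/assistant alternation.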
### Data Splits
Currently, the dataset is not divided into specific splits (train, test, validation).
## Dataset Creation
### Curation Rationale
The dataset was curated to provide insights into how users interact with language models and to contribute to the broader NLP community's resources.
### Source Data
The data originates from user contributions on the "Collective Cognition" website.
### Personal and Sensitive Information
All chats uploaded to the Collective Cognition website are made public, and are uploaded as a new dataset periodically. If you would like to have your chat removed, please email admin@collectivecognition.ai
## Considerations for Using the Data
### Social Impact of Dataset
The dataset offers a glimpse into the interaction dynamics between humans and AI models. It can be instrumental for researchers studying human-AI collaboration.
### Discussion of Biases
There might be biases in the dataset based on the types of users contributing chat logs and the topics they discuss with ChatGPT, particularly centered around what users may utilize ChatGPT for the most.
### Other Known Limitations
The dataset is dependent on the voluntary contributions of users. Hence, it might not represent the entire spectrum of interactions that users have with ChatGPT.
## Additional Information
### Licensing Information
MIT | 6,119 |
tomaarsen/conll2002 | 2023-09-23T10:53:11.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"language:nl",
"license... | tomaarsen | Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition.
We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
The participants of the shared task will be offered training and test data for at least two languages.
They will use the data for developing a named-entity recognition system that includes a machine learning component.
Information sources other than the training data may be used in this shared task.
We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
The train/validation/test sets are available in Spanish and Dutch.
For more details see https://www.clips.uantwerpen.be/conll2002/ner/ and https://www.aclweb.org/anthology/W02-2024/ | @inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
} | 0 | 8 | 2023-09-23T10:04:25 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- es
- nl
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2002
pretty_name: CoNLL-2002
config_names:
- es
- nl
dataset_info:
- config_name: es
features:
- name: id
dtype: string
- name: document_id
dtype: int32
- name: sentence_id
dtype: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': AO
'1': AQ
'2': CC
'3': CS
'4': DA
'5': DE
'6': DD
'7': DI
'8': DN
'9': DP
'10': DT
'11': Faa
'12': Fat
'13': Fc
'14': Fd
'15': Fe
'16': Fg
'17': Fh
'18': Fia
'19': Fit
'20': Fp
'21': Fpa
'22': Fpt
'23': Fs
'24': Ft
'25': Fx
'26': Fz
'27': I
'28': NC
'29': NP
'30': P0
'31': PD
'32': PI
'33': PN
'34': PP
'35': PR
'36': PT
'37': PX
'38': RG
'39': RN
'40': SP
'41': VAI
'42': VAM
'43': VAN
'44': VAP
'45': VAS
'46': VMG
'47': VMI
'48': VMM
'49': VMN
'50': VMP
'51': VMS
'52': VSG
'53': VSI
'54': VSM
'55': VSN
'56': VSP
'57': VSS
'58': Y
'59': Z
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 6738717
num_examples: 8323
- name: validation
num_bytes: 1349064
num_examples: 1915
- name: test
num_bytes: 1306252
num_examples: 1517
download_size: 4140690
dataset_size: 9394033
- config_name: nl
features:
- name: id
dtype: string
- name: document_id
dtype: int32
- name: sentence_id
dtype: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': Adj
'1': Adv
'2': Art
'3': Conj
'4': Int
'5': Misc
'6': N
'7': Num
'8': Prep
'9': Pron
'10': Punc
'11': V
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 5435346
num_examples: 15806
- name: validation
num_bytes: 1017418
num_examples: 2895
- name: test
num_bytes: 1850382
num_examples: 5195
download_size: 3642241
dataset_size: 8303146
---
# Dataset Card for CoNLL-2002
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://www.clips.uantwerpen.be/conll2002/ner/)
- **Repository:** [github](https://github.com/teropa/nlp/tree/master/resources/corpora/conll2002)
- **Paper:** [paper](https://www.aclweb.org/anthology/W02-2024/)
- **Point of Contact:** [Erik Tjong Kim Sang](erikt@uia.ua.ac.be)
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
- `part-of-speech`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.
### Languages
There are two languages available : Spanish (es) and Dutch (nl).
## Dataset Structure
### Data Instances
The examples look like this :
```
{
'id': '0',
'document_id': 0,
'sentence_id': 0,
'tokens': ['Melbourne', '(', 'Australia', ')', ',', '25', 'may', '(', 'EFE', ')', '.'],
'pos_tags': [29, 21, 29, 22, 13, 59, 28, 21, 28, 22, 20],
'ner_tags': [5, 0, 5, 0, 0, 0, 0, 0, 3, 0, 0]
}
```
The original data files within the Dutch sub-dataset have `-DOCSTART-` lines used to separate documents, but these lines are removed here.
Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.
### Data Fields
- `id`: id of the sample
- `document_id`: an `int32` feature tracking which document the sample is from.
- `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from.
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
- `pos_tags`: the POS tags of each token
The POS tags correspond to this list for Spanish:
```
'AO', 'AQ', 'CC', 'CS', 'DA', 'DE', 'DD', 'DI', 'DN', 'DP', 'DT', 'Faa', 'Fat', 'Fc', 'Fd', 'Fe', 'Fg', 'Fh', 'Fia', 'Fit', 'Fp', 'Fpa', 'Fpt', 'Fs', 'Ft', 'Fx', 'Fz', 'I', 'NC', 'NP', 'P0', 'PD', 'PI', 'PN', 'PP', 'PR', 'PT', 'PX', 'RG', 'RN', 'SP', 'VAI', 'VAM', 'VAN', 'VAP', 'VAS', 'VMG', 'VMI', 'VMM', 'VMN', 'VMP', 'VMS', 'VSG', 'VSI', 'VSM', 'VSN', 'VSP', 'VSS', 'Y', 'Z'
```
And this list for Dutch:
```
'Adj', 'Adv', 'Art', 'Conj', 'Int', 'Misc', 'N', 'Num', 'Prep', 'Pron', 'Punc', 'V'
```
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).
It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top-level entity is marked.
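As a minimal sketch of how this scheme decodes, the snippet below groups B-/I- tagged tokens from the example instance shown earlier back into entity spans. The tag list simply mirrors the NER tags above:

```python
# Label list mirroring the NER tags above.
NER_TAGS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_bio(tokens, tag_ids):
    """Group BIO-tagged tokens into (entity text, entity type) spans."""
    entities, current = [], None
    for token, tag_id in zip(tokens, tag_ids):
        tag = NER_TAGS[tag_id]
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = ([token], tag[2:])
        elif tag.startswith("I-") and current and current[1] == tag[2:]:
            current[0].append(token)
        else:  # "O", or a stray "I-" tag that does not continue the open span
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(" ".join(words), label) for words, label in entities]

# The example instance from "Data Instances" above.
tokens = ["Melbourne", "(", "Australia", ")", ",", "25", "may", "(", "EFE", ")", "."]
tag_ids = [5, 0, 5, 0, 0, 0, 0, 0, 3, 0, 0]
print(decode_bio(tokens, tag_ids))
```

In practice, entity-level F1 as used by the shared task counts a span as correct only on an exact match, which is what span-level decoding like this enables.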
### Data Splits
For both configurations (Spanish and Dutch), there are three splits.
The original splits were named `train`, `testa` and `testb` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| | train | validation | test |
| ----- |-------:|------------:|------:|
| N. Examples (Spanish) | 8324 | 1916 | 1518 |
| N. Examples (Dutch) | 15807 | 2896 | 5196 |
## Dataset Creation
### Curation Rationale
The dataset was created to provide new resources for two languages that were under-served for statistical machine learning at the time, Dutch and Spanish.
### Source Data
The Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
For the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.
#### Who are the annotators?
The Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).
The Dutch data was annotated as a part of the Atranos project at the University of Antwerp.
### Personal and Sensitive Information
The data is sourced from newspaper text and only contains mentions of public figures or individuals.
## Considerations for Using the Data
### Social Impact of Dataset
Named Entity Recognition systems can be used to efficiently index news text, allowing to easily gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.
### Discussion of Biases
News text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
The annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).
### Licensing Information
The licensing status of the data, especially the news source text, is unknown.
### Citation Information
```
@inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset. | 13,325 |
YL95/AllFpML5.10 | 2023-09-26T09:26:32.000Z | [
"region:us"
] | YL95 | null | null | 0 | 8 | 2023-09-25T10:21:25 | Entry not found | 15 |
M-A-D/Mixed-Arabic-Dataset-Main | 2023-10-06T17:56:33.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:summarization",
"language:ar",
"region:us"
] | M-A-D | null | null | 1 | 8 | 2023-09-25T10:52:11 | ---
language:
- ar
task_categories:
- conversational
- text-generation
- text2text-generation
- translation
- summarization
pretty_name: MAD
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: GenId
dtype: int64
- name: SubId
dtype: int64
- name: DatasetName
dtype: string
- name: DatasetLink
dtype: string
- name: Text
dtype: string
- name: MetaData
struct:
- name: AboutAuthor
dtype: string
- name: AboutBook
dtype: string
- name: Author
dtype: string
- name: AuthorName
dtype: string
- name: BookLink
dtype: string
- name: BookName
dtype: string
- name: ChapterLink
dtype: string
- name: ChapterName
dtype: string
- name: Tags
dtype: float64
- name: __index_level_0__
dtype: float64
- name: created_date
dtype: string
- name: deleted
dtype: bool
- name: detoxify
dtype: 'null'
- name: emojis
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: id
dtype: string
- name: labels
struct:
- name: count
sequence: int32
- name: name
sequence: string
- name: value
sequence: float64
- name: lang
dtype: string
- name: message_id
dtype: string
- name: message_tree_id
dtype: string
- name: model_name
dtype: 'null'
- name: parent_id
dtype: string
- name: query_id
dtype: string
- name: rank
dtype: float64
- name: review_count
dtype: float64
- name: review_result
dtype: bool
- name: role
dtype: string
- name: synthetic
dtype: bool
- name: title
dtype: string
- name: tree_state
dtype: string
- name: url
dtype: string
- name: user_id
dtype: string
- name: ConcatenatedText
dtype: int64
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 1990497610
num_examples: 131393
download_size: 790648134
dataset_size: 1990497610
---
# Dataset Card for "Mixed-Arabic-Dataset"
## Mixed Arabic Datasets (MAD)
The Mixed Arabic Datasets (MAD) project provides a comprehensive collection of diverse Arabic-language datasets, sourced from various repositories, platforms, and domains. These datasets cover a wide range of text types, including books, articles, Wikipedia content, stories, and more.
### MAD Repo vs. MAD Main
#### MAD Repo
- **Versatility**: In the MAD Repository (MAD Repo), datasets are made available in their original, native form. Researchers and practitioners can selectively download specific datasets that align with their specific interests or requirements.
- **Independent Access**: Each dataset is self-contained, enabling users to work with individual datasets independently, allowing for focused analyses and experiments.
#### MAD Main or simply MAD
- **Unified Dataframe**: MAD Main represents a harmonized and unified dataframe, incorporating all datasets from the MAD Repository. It provides a seamless and consolidated view of the entire MAD collection, making it convenient for comprehensive analyses and applications.
- **Holistic Perspective**: Researchers can access a broad spectrum of Arabic-language content within a single dataframe, promoting holistic exploration and insights across diverse text sources.
### Why MAD Main?
- **Efficiency**: Working with MAD Main streamlines the data acquisition process by consolidating multiple datasets into one structured dataframe. This is particularly beneficial for large-scale projects or studies requiring diverse data sources.
- **Interoperability**: With MAD Main, the datasets are integrated into a standardized format, enhancing interoperability and compatibility with a wide range of data processing and analysis tools.
- **Meta-Analysis**: Researchers can conduct comprehensive analyses, such as cross-domain studies, trend analyses, or comparative studies, by leveraging the combined richness of all MAD datasets.
### Getting Started
- To access individual datasets in their original form, refer to the MAD Repository ([Link to MAD Repo](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo)).
- For a unified view of all datasets, conveniently organized in a single dataframe, you are in the right place.
```python
from datasets import load_dataset
dataset = load_dataset("M-A-D/Mixed-Arabic-Dataset-Main")
```
### Join Us on Discord
For discussions, contributions, and community interactions, join us on Discord! [](https://discord.gg/2NpJ9JGm)
### How to Contribute
Want to contribute to the Mixed Arabic Datasets project? Follow our comprehensive guide on Google Colab for step-by-step instructions: [Contribution Guide](https://colab.research.google.com/drive/1w7_7lL6w7nM9DcDmTZe1Vfiwkio6SA-w?usp=sharing).
**Note**: If you'd like to test a contribution before submitting it, feel free to do so on the [MAD Test Dataset](https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Dataset-test).
## Citation
```
@dataset{
title = {Mixed Arabic Datasets (MAD)},
author = {MAD Community},
howpublished = {Dataset},
url = {https://huggingface.co/datasets/M-A-D/Mixed-Arabic-Datasets-Repo},
year = {2023},
}
``` | 5,502 |
joe-chiu/TinyChineseStories | 2023-09-25T23:19:08.000Z | [
"language:zh",
"region:us"
] | joe-chiu | null | null | 0 | 8 | 2023-09-25T21:58:35 | ---
language:
- zh
---
This is a dataset of short Chinese stories generated with GPT-3.5. It is inspired by the TinyStories dataset, but instead of millions of rows, I only generated a few thousand stories. The dataset was created as a learning exercise in using the GPT API to generate training data for a potential language model idea.
I created these stories by first using ChatGPT to generate a list of male and female character names, a list of genres with one-sentence story themes, and a list of story starters (similar to "Once upon a time"). I then used the GPT-3.5 chat completion API to generate short stories given three constraints: genre, theme, and sentence starter. The stories were generated in batches of 3, so every 3 stories share the exact same parameters.
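The batching scheme can be sketched in plain Python. Note that the name, genre, and starter lists below are illustrative placeholders, not the actual lists used for generation:

```python
import random

# Illustrative placeholders; the actual name/genre/starter lists are not published.
names = ["Xiaoming", "Xiaohong"]
genre_themes = [
    ("fantasy", "a child befriends a dragon"),
    ("adventure", "a lost kitten finds its way home"),
]
starters = ["Once upon a time,", "Long, long ago,"]

def sample_batch(batch_size=3):
    """Draw one set of constraints and reuse it for the whole batch,
    so every `batch_size` stories share the exact same parameters."""
    genre, theme = random.choice(genre_themes)
    params = {
        "name": random.choice(names),
        "genre": genre,
        "theme": theme,
        "starter": random.choice(starters),
    }
    return [params] * batch_size

batch = sample_batch()
print(batch[0])
```

Each returned batch would then be turned into a single chat-completion prompt asking for 3 stories under the shared constraints.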
---
license: cc-by-4.0
--- | 809 |
TrainingDataPro/customers-reviews-on-banks | 2023-09-26T11:08:32.000Z | [
"task_categories:text-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | null | null | 1 | 8 | 2023-09-26T11:05:11 | ---
license: cc-by-nc-nd-4.0
task_categories:
- text-classification
language:
- en
tags:
- code
- finance
---
# Customers Reviews on Banks ⭐️
The Reviews on Banks Dataset is a comprehensive collection of the **20,000** most recent customer reviews of **48** US banks.
This dataset, containing diverse reviews of multiple banks, can be useful for *sentiment analysis, assessing geographical variations in customer satisfaction, and exploring customer preferences through textual data*.
Understanding customer sentiments and preferences helps **banks** improve their services and address any issues raised by customers in their reviews.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=customers-reviews-on-banks) to discuss your requirements, learn about the price and buy the dataset.
# Content
For each item, we extracted:
- **author**: name of the reviewer,
- **date**: date of the review,
- **location**: location of the reviewer,
- **bank**: bank being reviewed,
- **star**: number of stars given to the bank by the reviewer,
- **text**: text of the review,
- **like**: number of likes on the review
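As a quick illustration of what these fields support, the sketch below computes the average star rating per bank. The records are invented for illustration and only reuse the field names listed above:

```python
from collections import defaultdict

# Invented records that only reuse the field names listed above.
reviews = [
    {"bank": "Bank A", "star": 5, "like": 3},
    {"bank": "Bank A", "star": 3, "like": 0},
    {"bank": "Bank B", "star": 4, "like": 1},
]

totals = defaultdict(lambda: [0, 0])  # bank -> [star sum, review count]
for review in reviews:
    totals[review["bank"]][0] += review["star"]
    totals[review["bank"]][1] += 1

avg_stars = {bank: star_sum / count for bank, (star_sum, count) in totals.items()}
print(avg_stars)
```

The same grouping pattern extends to the `location` and `date` fields for geographical or time-series breakdowns.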
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=customers-reviews-on-banks) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 1,670 | [
[
-0.044158935546875,
-0.04559326171875,
0.0112762451171875,
0.0117340087890625,
-0.0139923095703125,
-0.00010460615158081055,
-0.0009560585021972656,
-0.03204345703125,
0.01444244384765625,
0.0687255859375,
-0.03839111328125,
-0.06982421875,
-0.027435302734375,
... |
ArwaAbdul/Fingerprint_split_90_10 | 2023-09-28T12:14:02.000Z | [
"region:us"
] | ArwaAbdul | null | null | 0 | 8 | 2023-09-28T12:06:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
splits:
- name: train
num_bytes: 504155396.6682027
num_examples: 3000
- name: test
num_bytes: 77898517.33179724
num_examples: 472
download_size: 337755809
dataset_size: 582053914.0
---
# Dataset Card for "Fingerprint_split_90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 712 | [
[
-0.045654296875,
-0.01291656494140625,
0.02508544921875,
0.012664794921875,
-0.02423095703125,
0.0174102783203125,
0.040802001953125,
-0.0157470703125,
0.055389404296875,
0.040679931640625,
-0.047637939453125,
-0.04437255859375,
-0.02972412109375,
-0.0082550... |
Doub7e/SD-CLIP-alignment-composition | 2023-09-28T21:56:15.000Z | [
"region:us"
] | Doub7e | null | null | 0 | 8 | 2023-09-28T21:23:16 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: clip_pred
dtype: string
splits:
- name: train
num_bytes: 405174703.0
num_examples: 900
download_size: 405155460
dataset_size: 405174703.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SD-CLIP-alignment-composition"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 539 | [
[
-0.040008544921875,
-0.01427459716796875,
0.025299072265625,
0.032257080078125,
-0.01134490966796875,
0.0200347900390625,
0.0191192626953125,
-0.00704193115234375,
0.08355712890625,
0.0269775390625,
-0.07476806640625,
-0.049957275390625,
-0.0364990234375,
-0... |
renumics/spotlight-beans-enrichment | 2023-10-13T09:05:35.000Z | [
"region:us"
] | renumics | null | null | 0 | 8 | 2023-09-29T13:12:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_file_path.embedding
sequence: float32
length: 2
- name: image.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 16544
num_examples: 1034
- name: validation
num_bytes: 2128
num_examples: 133
- name: test
num_bytes: 2048
num_examples: 128
download_size: 33961
dataset_size: 20720
---
# Dataset Card for "spotlight-beans-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 757 | [
[
-0.041656494140625,
-0.0274505615234375,
0.0306549072265625,
0.0391845703125,
0.0022182464599609375,
0.0183563232421875,
0.006587982177734375,
-0.02337646484375,
0.059844970703125,
0.0277862548828125,
-0.05291748046875,
-0.04241943359375,
-0.044403076171875,
... |
wikipunk/fibo2023Q3 | 2023-10-04T20:03:28.000Z | [
"task_categories:graph-ml",
"annotations_creators:expert-generated",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"knowledge-graph",
"rdf",
"owl",
"ontology",
"region:us"
] | wikipunk | null | null | 0 | 8 | 2023-09-29T16:32:18 | ---
language:
- en
license: mit
tags:
- knowledge-graph
- rdf
- owl
- ontology
annotations_creators:
- expert-generated
pretty_name: FIBO
size_categories:
- 100K<n<1M
task_categories:
- graph-ml
dataset_info:
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
config_name: default
splits:
- name: train
num_bytes: 56045523
num_examples: 236579
dataset_size: 56045523
viewer: false
---
# FIBO: The Financial Industry Business Ontology
### Overview
In the world of financial technology, the vastness of data and the
complexity of financial instruments present both challenges and
opportunities. The Financial Industry Business Ontology (FIBO) offers
a structured framework that bridges the gap between theoretical
financial concepts and real-world data. I believe machine learning
researchers interested in the financial sector could use the
relationships in FIBO to innovate in financial feature engineering to
fine-tune existing models or build new ones.
#### Open Source
The FIBO ontology is developed on GitHub at
https://github.com/edmcouncil/fibo/.
### Use-cases
- Comprehensive Data Structure: FIBO offers a broad spectrum of
financial concepts, ranging from derivatives to securities. This
design, rooted in expert knowledge from both the knowledge
representation and financial sectors, ensures a profound
understanding of financial instruments.
- Decoding Complex Relationships: The financial domain is
characterized by its intricate interdependencies. FIBO's structured
approach provides clarity on these relationships, enabling machine
learning algorithms to identify patterns and correlations within
large datasets.
- Linkage with Real-world Data: A distinguishing feature of FIBO is
  its capability to associate financial concepts with real-world
  financial data and controlled vocabularies. This connection is
  crucial for researchers aiming to apply theoretical insights in
  practical contexts within financial enterprises, using their
  existing data.
- Retrieval Augmented Generation: The advent of Large Language Models,
particularly in conjunction with Retrieval Augmented Generation
(RAG), holds promise for revolutionizing the way financial data is
processed and interpreted.
- Document Classification: With the surge in financial documents,
  using RAG to categorize financial datasets classified by FIBO
  concepts can help financial analysts achieve greater accuracy and
  depth in data interpretation, facilitated by intelligent prompting.
#### Building and Verification:
1. **Construction**: The ontology was imported from
[AboutFIBOProd-IncludingReferenceData](https://github.com/edmcouncil/fibo/blob/master/AboutFIBOProd-IncludingReferenceData.rdf)
into Protege version 5.6.1.
2. **Reasoning**: Due to the large size of the ontology, I used the ELK
   reasoner plugin to materialize (make explicit) the inferences in the
   ontology.
3. **Coherence Check**: The Debug Ontology plugin in Protege was used
to ensure the ontology's coherence and consistency.
4. **Export**: After verification, inferred axioms, along with
asserted axioms and annotations, were [exported using Protege](https://www.michaeldebellis.com/post/export-inferred-axioms).
5. **Encoding and Compression**: [Apache Jena's
riot](https://jena.apache.org/documentation/tools/) was used to convert the
result to ntriples, which was then compressed with gzip. This
compressed artifact is downloaded and extracted by the Hugging Face
datasets library to yield the examples in the dataset.
### Usage
First make sure you have the requirements installed:
```bash
pip install datasets
pip install rdflib
```
You can load the dataset using the Hugging Face Datasets library with the following Python code:
```python
from datasets import load_dataset
dataset = load_dataset('wikipunk/fibo2023Q3', split='train')
```
## Features
The FIBO dataset is composed of triples representing the relationships
between different financial concepts and named individuals such as
market participants, corporations, and contractual agents.
#### Note on Format:
The subject, predicate, and object features are stored in N3 notation
with no prefix mappings. This allows users to parse each component
using `rdflib.util.from_n3` from the RDFLib Python library.
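As a concrete sketch, the IRI case can be handled with a few lines of pure Python (this is illustrative only; `rdflib.util.from_n3` is the robust way to parse all term types, including literals and blank nodes):

```python
def n3_iri_to_str(term: str) -> str:
    """Strip the angle brackets from an N3 IRI term.

    Minimal sketch for IRI terms only; use rdflib.util.from_n3 for
    literals and blank nodes as well.
    """
    if term.startswith("<") and term.endswith(">"):
        return term[1:-1]
    return term

predicate = "<https://spec.edmcouncil.org/fibo/ontology/SEC/Securities/SecuritiesListings/isTradedOn>"
print(n3_iri_to_str(predicate))
```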
### 1. **Subject** (`string`)
The subject of a triple is the primary entity or focus of the statement. In this dataset, the subject often represents a specific financial instrument or entity. For instance:
`<https://spec.edmcouncil.org/fibo/ontology/SEC/Equities/EquitiesExampleIndividuals/XNYSListedTheCoca-ColaCompanyCommonStock>`
refers to the common stock of The Coca-Cola Company that is listed on
the NYSE.
### 2. **Predicate** (`string`)
The predicate of a triple indicates the nature of the relationship between the subject and the object. It describes a specific property, characteristic, or connection of the subject. In our example:
`<https://spec.edmcouncil.org/fibo/ontology/SEC/Securities/SecuritiesListings/isTradedOn>`
signifies that the financial instrument (subject) is traded on a
particular exchange (object).
### 3. **Object** (`string`)
The object of a triple is the entity or value that is associated with the subject via the predicate. It can be another financial concept, a trading platform, or any other related entity. In the context of our example:
`<https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/NorthAmericanEntities/USMarketsAndExchangesIndividuals/NewYorkStockExchange>`
represents the New York Stock Exchange where the aforementioned
Coca-Cola common stock is traded.
#### Continued
Here is another example of a triple in the dataset:
- Subject: `"<https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/MarketsIndividuals/ServiceProvider-L-JEUVK5RWVJEN8W0C9M24>"`
- Predicate: `"<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"`
- Object: `"<https://spec.edmcouncil.org/fibo/ontology/BE/FunctionalEntities/FunctionalEntities/FunctionalEntity>"`
This triple represents the statement that the market individual
[ServiceProvider-L-JEUVK5RWVJEN8W0C9M24](https://spec.edmcouncil.org/fibo/ontology/FBC/FunctionalEntities/MarketsIndividuals/ServiceProvider-L-JEUVK5RWVJEN8W0C9M24)
is of type
[FunctionalEntity](https://spec.edmcouncil.org/fibo/ontology/BE/FunctionalEntities/FunctionalEntities/FunctionalEntity).
#### Note:
The dataset contains example individuals from the ontology as
reference points. These examples provide a structured framework for
understanding the relationships and entities within the financial
domain. However, the individuals included are not exhaustive. With
advancements in Large Language Models, especially Retrieval Augmented
Generation (RAG), there's potential to generate and expand upon these
examples, enriching the dataset with more structured data and
insights.
### FIBO Viewer
Use the [FIBO Viewer](https://spec.edmcouncil.org/fibo/ontology) to
explore the ontology on the web. One of the coolest features of FIBO
is that any entity whose IRI begins with
https://spec.edmcouncil.org/fibo/ontology/ can be looked up on the web
simply by opening its URL in a browser or any HTTP client.
## Ideas for Deriving Graph Neural Network Features from FIBO:
Graph Neural Networks (GNNs) have emerged as a powerful tool for
machine learning on structured data. FIBO, with its structured
ontology, can be leveraged to derive features for GNNs.
### Node Features:
- **rdf:type**: Each entity in FIBO has one or more `rdf:type`
  (`<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>`) assertions
  indicating its class or category. These can serve as primary node
  features to encode.
- **Entity Attributes**: Attributes of each entity, such as names or
descriptions, can be used as additional node features. Consider
embedding descriptions using a semantic text embedding model.
### Edge Features:
- **RDF Predicates**: The relationships between entities in FIBO are
represented using RDF predicates. These predicates can serve as edge
features in a GNN, capturing the nature of the relationship between
nodes.
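A minimal, library-free sketch of turning such triples into integer node ids and edge-type ids (the triples below are toy placeholders, not real FIBO data; a real pipeline would hand the result to a GNN library such as PyTorch Geometric):

```python
# Toy triples in the dataset's (subject, predicate, object) string format.
triples = [
    ("<ex:CocaColaCommonStock>", "<fibo:isTradedOn>", "<ex:NYSE>"),
    ("<ex:CocaColaCommonStock>", "<rdf:type>", "<fibo:Share>"),
]

# Assign a contiguous integer id to every entity (node) and predicate
# (edge type), collecting edges as (source_id, edge_type_id, target_id).
node_index: dict = {}
edge_type_index: dict = {}
edges = []

for s, p, o in triples:
    src = node_index.setdefault(s, len(node_index))
    dst = node_index.setdefault(o, len(node_index))
    rel = edge_type_index.setdefault(p, len(edge_type_index))
    edges.append((src, rel, dst))

# `edges` can now be converted into edge_index / edge_type tensors.
print(len(node_index), len(edge_type_index), edges)
```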
### Potential Applications:
1. **Entity Classification**: Using the derived node and edge
features, GNNs can classify entities into various financial
categories, enhancing the granularity of financial data analysis.
2. **Relationship Prediction**: GNNs can predict potential
relationships between entities, aiding in the discovery of hidden
patterns or correlations within the financial data.
3. **Anomaly Detection**: By training GNNs on the structured data from
   FIBO and interlinked financial datasets, anomalies or
   irregularities can be detected, helping to ensure data integrity
   and accuracy.
### Acknowledgements
We extend our sincere gratitude to the FIBO contributors for their
meticulous efforts in knowledge representation. Their expertise and
dedication have been instrumental in shaping a comprehensive and
insightful framework that serves as a cornerstone for innovation in
the financial industry.
If you are interested in modeling the financial industry you should
consider [contributing to
FIBO](https://github.com/edmcouncil/fibo/blob/master/CONTRIBUTING.md).
### Citation
```bibtex
@misc{fibo2023Q3,
title={Financial Industry Business Ontology (FIBO)},
author={Object Management Group, Inc. and EDM Council, Inc. and Various Contributors},
year={2023},
note={Available as OWL 2 ontologies and UML models compliant with the Semantics for Information Modeling and Federation (SMIF) draft specification. Contributions are open on GitHub, consult the repository for a list of contributors.},
howpublished={\url{https://spec.edmcouncil.org/fibo/}},
abstract={The Financial Industry Business Ontology (FIBO) is a collaborative effort to standardize the language used to define the terms, conditions, and characteristics of financial instruments; the legal and relationship structure of business entities; the content and time dimensions of market data; and the legal obligations and process aspects of corporate actions.},
license={MIT License, \url{https://opensource.org/licenses/MIT}}
}
```
| 10,268 | [
[
-0.0262451171875,
-0.0819091796875,
0.01448822021484375,
0.027679443359375,
0.01203155517578125,
0.007152557373046875,
0.01323699951171875,
-0.052581787109375,
0.019378662109375,
0.07623291015625,
-0.03704833984375,
-0.033203125,
-0.03167724609375,
0.0065689... |
jitx/distillation_code_100 | 2023-09-30T01:30:18.000Z | [
"region:us"
] | jitx | null | null | 0 | 8 | 2023-09-30T01:30:16 | ---
dataset_info:
features:
- name: santacoder_prompts
dtype: string
- name: fim_inputs
dtype: string
- name: label_middles
dtype: string
- name: santacoder_outputs
dtype: string
- name: openai_rationales
dtype: string
splits:
- name: train
num_bytes: 399654
num_examples: 100
download_size: 155882
dataset_size: 399654
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "distillation_code_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 633 | [
[
-0.03814697265625,
-0.0225372314453125,
0.0269622802734375,
0.0279388427734375,
-0.01538848876953125,
0.0009603500366210938,
0.01214599609375,
0.0200042724609375,
0.044647216796875,
0.0260162353515625,
-0.04852294921875,
-0.05426025390625,
-0.046539306640625,
... |
mickylan2367/ColorSpectrogram | 2023-09-30T12:33:24.000Z | [
"language:en",
"music",
"art",
"region:us"
] | mickylan2367 | null | null | 0 | 8 | 2023-09-30T10:42:55 | ---
language:
- en
tags:
- music
- art
---
## Spectrograms of the music in Google/MusicCaps
* Spectrograms of Google/MusicCaps audio; a color version is kept in addition to the grayscale one.
### Basic information
* sampling_rate: int = 44100
## References and notes
* (Note) Honestly, the grayscale version could probably be produced just by applying torchvision.transforms' grayscale conversion to this color version.
* The code used for downloading is available <a href="https://colab.research.google.com/drive/1HmDorbxD5g6C2WDjLierUqbhecTdRvgA?usp=sharing">here</a>.
* Reference: https://www.kaggle.com/code/osanseviero/musiccaps-explorer
* Pipeline: download the wav files with the Kaggle reference code, then, while generating each spectrogram, write a JSON line such as
```
{"filename":"spectrogram_*.png", "caption":"This is beautiful music"}
```
to metadata.jsonl, and upload the result.
* If the Hugging Face dataset viewer stops working, it is also worth downloading the dataset once in Google Colab to check it.
* It may simply be a Hugging Face bug (true story (´;ω;`))
[
-0.0615234375,
-0.034759521484375,
0.017852783203125,
0.04705810546875,
-0.03802490234375,
0.031524658203125,
-0.03375244140625,
-0.0244140625,
0.0806884765625,
0.00988006591796875,
-0.06549072265625,
-0.04730224609375,
-0.039825439453125,
0.01904296875,
... |
learn3r/SDG_cs | 2023-10-01T11:45:46.000Z | [
"region:us"
] | learn3r | null | null | 0 | 8 | 2023-10-01T11:45:43 | ---
dataset_info:
features:
- name: jargon
dtype: string
- name: definition
dtype: string
splits:
- name: train
num_bytes: 44588
num_examples: 200
download_size: 29080
dataset_size: 44588
---
# Dataset Card for "SDG_cs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 382 | [
[
-0.0426025390625,
-0.00870513916015625,
0.0254669189453125,
0.025054931640625,
-0.016387939453125,
0.01163482666015625,
0.01165008544921875,
-0.00476837158203125,
0.05865478515625,
0.0282440185546875,
-0.06964111328125,
-0.06646728515625,
-0.04107666015625,
... |
Emphive/PythonQA | 2023-10-02T06:23:15.000Z | [
"region:us"
] | Emphive | null | null | 0 | 8 | 2023-10-02T06:22:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
manu/french-30b | 2023-10-16T05:21:51.000Z | [
"region:us"
] | manu | null | null | 0 | 8 | 2023-10-02T18:47:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: dataset_id
dtype: string
splits:
- name: train
num_bytes: 108345820734
num_examples: 63517737
- name: test
num_bytes: 483041948
num_examples: 93498
download_size: 0
dataset_size: 108828862682
---
# Dataset Card for "french_30b2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 619 | [
[
-0.050201416015625,
-0.0163726806640625,
0.01251220703125,
0.04388427734375,
0.0013723373413085938,
0.0004935264587402344,
0.012115478515625,
-0.00910186767578125,
0.038238525390625,
0.043731689453125,
-0.055023193359375,
-0.051300048828125,
-0.04742431640625,
... |
relaxtraffic/attrain | 2023-10-03T09:36:58.000Z | [
"region:us"
] | relaxtraffic | null | null | 0 | 8 | 2023-10-03T09:30:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Falah/3d_perspective_drawing | 2023-10-03T12:36:18.000Z | [
"region:us"
] | Falah | null | null | 0 | 8 | 2023-10-03T12:36:17 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 174080
num_examples: 1000
download_size: 18501
dataset_size: 174080
---
# Dataset Card for "3d_perspective_drawing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 363 | [
[
-0.05389404296875,
-0.01316070556640625,
0.0338134765625,
0.035980224609375,
-0.0115509033203125,
-0.01155853271484375,
0.04718017578125,
-0.0285491943359375,
0.060333251953125,
0.045318603515625,
-0.052764892578125,
-0.05938720703125,
-0.040771484375,
-0.01... |
Kris8an/30k_no_obs | 2023-10-03T16:43:30.000Z | [
"region:us"
] | Kris8an | null | null | 0 | 8 | 2023-10-03T15:37:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
indiejoseph/ted-translation-zhhk-zhcn | 2023-10-03T20:51:18.000Z | [
"region:us"
] | indiejoseph | null | null | 0 | 8 | 2023-10-03T19:59:34 | ---
dataset_info:
features:
- name: yue
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 518228
num_examples: 6480
download_size: 382566
dataset_size: 518228
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ted-translation-zhhk-zhcn"
Transcriptions of TED videos converted into Cantonese.
| 404 | [
[
0.0222930908203125,
-0.03411865234375,
-0.001239776611328125,
0.0276336669921875,
-0.062255859375,
0.0078277587890625,
-0.01413726806640625,
0.0258331298828125,
0.03961181640625,
0.0380859375,
-0.06689453125,
-0.05218505859375,
-0.033599853515625,
0.01080322... |
yudiwbs/olimpiade | 2023-10-04T02:44:10.000Z | [
"region:us"
] | yudiwbs | null | null | 0 | 8 | 2023-10-04T02:39:37 | Sumber: https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results
Dataset untuk modul praktikum https://docs.google.com/document/d/1ehUlhdLeubEJz9qc3fvGeCRtZindHhowyGhg5Pbqq3w/edit | 215 | [
[
-0.0161285400390625,
-0.01593017578125,
0.02752685546875,
-0.01190185546875,
-0.0292816162109375,
-0.0024566650390625,
0.0097198486328125,
-0.020721435546875,
0.050079345703125,
0.0360107421875,
-0.042205810546875,
-0.034454345703125,
-0.050811767578125,
0.0... |
AndyLiu0104/Soldering-Data-Tiny-1004-unsolder-area | 2023-10-04T16:28:52.000Z | [
"region:us"
] | AndyLiu0104 | null | null | 0 | 8 | 2023-10-04T07:13:56 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 18073742.875
num_examples: 10481
download_size: 0
dataset_size: 18073742.875
---
# Dataset Card for "Soldering-Data-Tiny-1004-unsolder-area"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 418 | [
[
-0.0281829833984375,
-0.0235137939453125,
0.03326416015625,
0.00605010986328125,
-0.01338958740234375,
-0.005680084228515625,
0.01169586181640625,
0.03948974609375,
0.056671142578125,
0.036285400390625,
-0.05889892578125,
-0.037841796875,
-0.0142669677734375,
... |
Viswa09/gujarati_speechdata | 2023-10-05T03:40:48.000Z | [
"region:us"
] | Viswa09 | null | null | 0 | 8 | 2023-10-04T10:50:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype: audio
- name: language
dtype: string
- name: language_probability
dtype: float64
- name: segments
list:
- name: avg_logprob
dtype: float64
- name: start
dtype: float64
- name: end
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4531512898.0
num_examples: 434
download_size: 2099229077
dataset_size: 4531512898.0
---
# Dataset Card for "gujarati_speechdata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 733 | [
[
-0.0323486328125,
-0.02923583984375,
0.004283905029296875,
0.021697998046875,
-0.011138916015625,
0.0013713836669921875,
0.0016965866088867188,
-0.003143310546875,
0.052398681640625,
0.02520751953125,
-0.052978515625,
-0.05743408203125,
-0.050323486328125,
-... |
renumics/spotlight-emodb-enrichment | 2023-10-13T09:24:05.000Z | [
"region:us"
] | renumics | null | null | 0 | 8 | 2023-10-04T15:15:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 4280
num_examples: 535
download_size: 6880
dataset_size: 4280
---
# Dataset Card for "spotlight-emodb-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 473 | [
[
-0.0498046875,
-0.036895751953125,
0.02935791015625,
0.02099609375,
-0.01580810546875,
0.01873779296875,
0.009124755859375,
-0.0102691650390625,
0.07464599609375,
0.032318115234375,
-0.0599365234375,
-0.05523681640625,
-0.031463623046875,
-0.0065040588378906... |
ikiransuryavanshi/llama_training | 2023-10-05T09:46:13.000Z | [
"region:us"
] | ikiransuryavanshi | null | null | 0 | 8 | 2023-10-05T09:28:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vsarathy/nl-robotics-translation-simple_english-30k-context | 2023-10-05T14:43:52.000Z | [
"region:us"
] | vsarathy | null | null | 0 | 8 | 2023-10-05T14:43:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Intuit-GenSRF/jigsaw-unintended-bias-train-es | 2023-10-06T00:14:53.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 8 | 2023-10-06T00:14:32 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 651762756
num_examples: 1900137
download_size: 421521895
dataset_size: 651762756
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jigsaw-unintended-bias-train-es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 506 | [
[
-0.040740966796875,
-0.0152130126953125,
0.0180206298828125,
0.01788330078125,
-0.0275726318359375,
-0.01068115234375,
0.0195159912109375,
-0.0017232894897460938,
0.063232421875,
0.028961181640625,
-0.06121826171875,
-0.036346435546875,
-0.05181884765625,
-0... |
minh21/COVID-QA-sentence-transformer-biencoder-data-75_25 | 2023-10-06T07:38:09.000Z | [
"region:us"
] | minh21 | null | null | 0 | 8 | 2023-10-06T07:38:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
- name: document_id
dtype: int64
splits:
- name: train
num_bytes: 25188652
num_examples: 12274
- name: test
num_bytes: 2473938
num_examples: 1360
download_size: 1946559
dataset_size: 27662590
---
# Dataset Card for "COVID-QA-sentence-transformer-biencoder-data-75_25"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 697 | [
[
-0.02679443359375,
-0.01959228515625,
0.00852203369140625,
0.02093505859375,
-0.010833740234375,
-0.00920867919921875,
0.016937255859375,
-0.0003917217254638672,
0.04071044921875,
0.0224761962890625,
-0.05364990234375,
-0.0499267578125,
-0.037567138671875,
-... |
renumics/spotlight-boolq-enrichment | 2023-10-13T09:10:09.000Z | [
"region:us"
] | renumics | null | null | 0 | 8 | 2023-10-06T13:06:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question.embedding
sequence: float32
length: 2
- name: passage.embedding
sequence: float32
length: 2
splits:
- name: train
num_bytes: 150832
num_examples: 9427
- name: validation
num_bytes: 52320
num_examples: 3270
download_size: 284725
dataset_size: 203152
---
# Dataset Card for "spotlight-boolq-enrichment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 662 | [
[
-0.03564453125,
-0.01751708984375,
0.0182952880859375,
0.01528167724609375,
-0.01007843017578125,
0.023834228515625,
0.020904541015625,
-0.02313232421875,
0.04864501953125,
0.044464111328125,
-0.0577392578125,
-0.0489501953125,
-0.019622802734375,
-0.0176086... |
Hack90/ncbi_genbank_part_8 | 2023-10-06T16:44:09.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 8 | 2023-10-06T16:25:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 19567803802
num_examples: 10984
download_size: 9068866549
dataset_size: 19567803802
---
# Dataset Card for "ncbi_genbank_part_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 641 | [
[
-0.04327392578125,
-0.01204681396484375,
0.0238494873046875,
0.0113525390625,
-0.0234527587890625,
0.0171051025390625,
0.038330078125,
-0.01018524169921875,
0.05194091796875,
0.042510986328125,
-0.03900146484375,
-0.05816650390625,
-0.033782958984375,
0.0095... |
Wongeze/test | 2023-10-24T11:33:03.000Z | [
"region:us"
] | Wongeze | null | null | 0 | 8 | 2023-10-07T03:22:25 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 201257
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 428 | [
[
-0.04620361328125,
-0.028656005859375,
0.00555419921875,
0.0131072998046875,
-0.009124755859375,
0.00058746337890625,
0.0164794921875,
-0.00917816162109375,
0.050537109375,
0.0228424072265625,
-0.056121826171875,
-0.04486083984375,
-0.03240966796875,
-0.0128... |
LongJiAn/marsh-capstone | 2023-10-07T06:17:47.000Z | [
"region:us"
] | LongJiAn | null | null | 0 | 8 | 2023-10-07T05:14:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
carnival13/massive_val_DA5_tokenized | 2023-10-07T11:03:09.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 8 | 2023-10-07T11:03:04 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 16518310
num_examples: 24160
download_size: 3778628
dataset_size: 16518310
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_val_DA5_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.04241943359375,
-0.034423828125,
0.01078033447265625,
0.0194854736328125,
-0.0219879150390625,
0.0028553009033203125,
0.0271759033203125,
-0.006504058837890625,
0.06396484375,
0.042083740234375,
-0.0447998046875,
-0.058380126953125,
-0.05279541015625,
-0.... |
matthewfranglen/aste-v2 | 2023-10-09T10:05:10.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"arxiv:1911.01616",
"arxiv:2107.12214",
"arxiv:2010.02609",
"region:us"
] | matthewfranglen | null | null | 0 | 8 | 2023-10-07T18:45:31 | ---
pretty_name: "Aspect Sentiment Triplet Extraction v2"
language:
- en
arxiv:
- 2107.12214
- 2010.02609
- 1911.01616
size_categories:
- 1K<n<10K
task_categories:
- token-classification
- text-classification
configs:
- config_name: 2014-laptop-sem-eval
data_files:
- split: train
path: "data/2014/laptop/sem-eval/train.gz.parquet"
- split: valid
path: "data/2014/laptop/sem-eval/valid.gz.parquet"
- split: test
path: "data/2014/laptop/sem-eval/test.gz.parquet"
- config_name: 2014-laptop-aste-v2
data_files:
- split: train
path: "data/2014/laptop/aste/train.gz.parquet"
- split: valid
path: "data/2014/laptop/aste/valid.gz.parquet"
- split: test
path: "data/2014/laptop/aste/test.gz.parquet"
- config_name: 2014-restaurant-sem-eval
data_files:
- split: train
path: "data/2014/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2014/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2014/restaurant/sem-eval/test.gz.parquet"
- config_name: 2014-restaurant-aste-v2
data_files:
- split: train
path: "data/2014/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2014/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2014/restaurant/aste/test.gz.parquet"
- config_name: 2015-restaurant-sem-eval
data_files:
- split: train
path: "data/2015/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2015/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2015/restaurant/sem-eval/test.gz.parquet"
- config_name: 2015-restaurant-aste-v2
data_files:
- split: train
path: "data/2015/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2015/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2015/restaurant/aste/test.gz.parquet"
- config_name: 2016-restaurant-sem-eval
data_files:
- split: train
path: "data/2016/restaurant/sem-eval/train.gz.parquet"
- split: valid
path: "data/2016/restaurant/sem-eval/valid.gz.parquet"
- split: test
path: "data/2016/restaurant/sem-eval/test.gz.parquet"
- config_name: 2016-restaurant-aste-v2
data_files:
- split: train
path: "data/2016/restaurant/aste/train.gz.parquet"
- split: valid
path: "data/2016/restaurant/aste/valid.gz.parquet"
- split: test
path: "data/2016/restaurant/aste/test.gz.parquet"
---
## Dataset Description
### Task Summary
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and the opinion spans explaining the reason for the sentiment.
This task was first proposed by Peng et al. (2020) in the paper [Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis (In AAAI 2020)](https://arxiv.org/abs/1911.01616).
For Example, given the sentence:
> The screen is very large and crystal clear with amazing colors and resolution .
The objective of the Aspect Sentiment Triplet Extraction (ASTE) task is to predict the triplets:
> [('screen', 'large', 'Positive'), ('screen', 'clear', 'Positive'), ('colors', 'amazing', 'Positive'), ('resolution', 'amazing', 'Positive')]
where a triplet consists of (target, opinion, sentiment).
### Dataset Summary
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint.
The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, regardless of the entities mentioned (e.g., laptops, restaurants) and their aspects (e.g., battery, screen; food, service).
By contrast, this task is concerned with aspect-based sentiment analysis (ABSA), where the goal is to identify the aspects of given target entities and the sentiment expressed towards each aspect.
This dataset consists of customer reviews with human-authored annotations identifying the mentioned aspects of the target entities and the sentiment polarity of each aspect.
### Dataset Source
The ASTE dataset is from the [xuuuluuu/SemEval-Triplet-data](https://github.com/xuuuluuu/SemEval-Triplet-data) repository.
It is based on the Sem Eval 2014, 2015 and 2016 datasets, with some preprocessing applied to the text.
* [Sem Eval 2014 Task 4](https://alt.qcri.org/semeval2014/task4/)
* [Sem Eval 2015 Task 12](https://alt.qcri.org/semeval2015/task12/)
* [Sem Eval 2016 Task 5](https://alt.qcri.org/semeval2016/task5/)
### Dataset Details
The train, validation and test splits come from the ASTE dataset.
There are the following columns:
* index
The ASTE and Sem Eval datasets had multiple annotations per document.
This dataset has a single annotation per row.
To make it easier to collect all annotations for a document, the index can be used to group them.
All annotations for a given document will have the same index.
* text
This is the document that is annotated, either in the ASTE form or in the Sem Eval form (see below for details).
* aspect_start_index
The zero-based character index of the first letter of the aspect term
* aspect_end_index
The zero-based character index of the last letter of the aspect term
* aspect_term
The aspect term as it appears in the text
* opinion_start_index
The zero-based character index of the first letter of the opinion term
* opinion_end_index
The zero-based character index of the last letter of the opinion term
* opinion_term
The opinion term as it appears in the text
* sentiment
The sentiment class for the opinion about the aspect.
One of _negative_, _neutral_ or _positive_.
The ASTE dataset involved preprocessing the SemEval text.
This preprocessing fixed some of the spelling mistakes, for example:
> Keyboard good sized and wasy to use.
(easy misspelt as wasy).
The preprocessing also included tokenizing the text and then separating the tokens with whitespace, for example:
> It 's just as fast with one program open as it is with sixteen open .
Since the added whitespace can lead to unnatural text, I have provided two forms of the dataset.
Subsets that end with `aste-v2` have the preprocessed text with spelling correction and additional whitespace.
Subsets that end with `sem-eval` have the original Sem Eval text.
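The per-row layout above can be sketched in code. The snippet below is an illustrative example (not part of the dataset card): it shows how rows sharing an `index` can be grouped back into per-document triplet lists, and how the character indices slice the terms out of the text. The sample rows are hypothetical; in practice they would come from e.g. `datasets.load_dataset("matthewfranglen/aste-v2", "2014-laptop-aste-v2")`.

```python
# Hypothetical rows mirroring the column schema described above.
from collections import defaultdict

rows = [
    {
        "index": 0,
        "text": "The screen is very large and crystal clear .",
        "aspect_start_index": 4, "aspect_end_index": 9, "aspect_term": "screen",
        "opinion_start_index": 19, "opinion_end_index": 23, "opinion_term": "large",
        "sentiment": "positive",
    },
    {
        "index": 0,
        "text": "The screen is very large and crystal clear .",
        "aspect_start_index": 4, "aspect_end_index": 9, "aspect_term": "screen",
        "opinion_start_index": 37, "opinion_end_index": 41, "opinion_term": "clear",
        "sentiment": "positive",
    },
]

# The end index points at the last letter of the term, so Python slicing
# needs end + 1 to recover the term from the text.
for row in rows:
    assert row["text"][row["aspect_start_index"]:row["aspect_end_index"] + 1] == row["aspect_term"]
    assert row["text"][row["opinion_start_index"]:row["opinion_end_index"] + 1] == row["opinion_term"]

# Group the one-annotation-per-row layout back into per-document triplets.
triplets = defaultdict(list)
for row in rows:
    triplets[row["index"]].append((row["aspect_term"], row["opinion_term"], row["sentiment"]))

print(triplets[0])  # [('screen', 'large', 'positive'), ('screen', 'clear', 'positive')]
```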
### Citation Information
```
@misc{xu2021learning,
title={Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction},
author={Lu Xu and Yew Ken Chia and Lidong Bing},
year={2021},
eprint={2107.12214},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{xu2021positionaware,
title={Position-Aware Tagging for Aspect Sentiment Triplet Extraction},
author={Lu Xu and Hao Li and Wei Lu and Lidong Bing},
year={2021},
eprint={2010.02609},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{peng2019knowing,
title={Knowing What, How and Why: A Near Complete Solution for Aspect-based Sentiment Analysis},
author={Haiyun Peng and Lu Xu and Lidong Bing and Fei Huang and Wei Lu and Luo Si},
year={2019},
eprint={1911.01616},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| 7,161 | [
[
-0.036834716796875,
-0.052459716796875,
0.038665771484375,
0.03619384765625,
-0.011016845703125,
0.00432586669921875,
-0.01474761962890625,
-0.02947998046875,
0.03240966796875,
0.050079345703125,
-0.035400390625,
-0.05828857421875,
-0.0264739990234375,
0.016... |
Intuit-GenSRF/ziq-depression-tweet-es | 2023-10-07T22:25:29.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 8 | 2023-10-07T22:25:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: processed_text
sequence: string
- name: num_tokens
dtype: int64
- name: text_en
dtype: string
splits:
- name: train
num_bytes: 51261868
num_examples: 51132
download_size: 32137564
dataset_size: 51261868
---
# Dataset Card for "ziq-depression_tweet-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 613 | [
[
-0.032073974609375,
-0.003841400146484375,
0.027008056640625,
0.031951904296875,
-0.02740478515625,
0.006000518798828125,
0.006473541259765625,
0.003871917724609375,
0.07073974609375,
0.00638580322265625,
-0.080078125,
-0.07684326171875,
-0.0482177734375,
-0... |
marcus2000/dataset4sentinement_HSE | 2023-10-08T00:39:44.000Z | [
"region:us"
] | marcus2000 | null | null | 0 | 8 | 2023-10-08T00:13:47 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 3679508.0480941418
num_examples: 3322
- name: test
num_bytes: 650171.9519058582
num_examples: 587
download_size: 2311435
dataset_size: 4329680.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "dataset4sentinement_HSE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 606 | [
[
-0.03741455078125,
-0.00011324882507324219,
0.0237884521484375,
0.0140380859375,
-0.000858306884765625,
-0.005466461181640625,
0.030120849609375,
-0.00009292364120483398,
0.057464599609375,
0.034637451171875,
-0.056060791015625,
-0.0599365234375,
-0.034423828125... |
SuodhanJ6/elliptic_txs_features | 2023-10-08T06:20:10.000Z | [
"region:us"
] | SuodhanJ6 | null | null | 0 | 8 | 2023-10-08T06:18:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Falah/book_cover_prompts_with_sections | 2023-10-08T08:54:49.000Z | [
"region:us"
] | Falah | null | null | 0 | 8 | 2023-10-08T08:17:39 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 393452
num_examples: 1000
download_size: 45494
dataset_size: 393452
---
# Dataset Card for "book_cover_prompts_with_sections"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 373 | [
[
-0.049560546875,
-0.00824737548828125,
0.00927734375,
0.0204010009765625,
-0.0166015625,
-0.005268096923828125,
0.024322509765625,
0.0130157470703125,
0.044342041015625,
0.047637939453125,
-0.07672119140625,
-0.06304931640625,
-0.026763916015625,
-0.00355529... |
Falah/programming_book_cover_prompts | 2023-10-08T09:00:51.000Z | [
"region:us"
] | Falah | null | null | 0 | 8 | 2023-10-08T09:00:50 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 191332
num_examples: 1000
download_size: 24579
dataset_size: 191332
---
# Dataset Card for "programming_book_cover_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 371 | [
[
-0.043243408203125,
-0.0026302337646484375,
0.00505828857421875,
0.025848388671875,
0.0037689208984375,
0.00212860107421875,
0.019287109375,
0.0164642333984375,
0.038238525390625,
0.0362548828125,
-0.0770263671875,
-0.060089111328125,
-0.028656005859375,
-0.... |
hk-kaden-kim/uzh-hs23-etsp-eval-multi-subplot-line | 2023-10-08T11:00:11.000Z | [
"region:us"
] | hk-kaden-kim | null | null | 0 | 8 | 2023-10-08T10:47:27 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 5443206.0
num_examples: 100
download_size: 5428035
dataset_size: 5443206.0
---
# Dataset Card for "uzh-hs23-etsp-eval-multi-subplot-line"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 417 | [
[
-0.043243408203125,
-0.02935791015625,
0.01629638671875,
0.0173187255859375,
-0.035400390625,
0.02130126953125,
0.0213623046875,
-0.00408935546875,
0.057708740234375,
0.0298614501953125,
-0.05560302734375,
-0.041534423828125,
-0.0167236328125,
-0.02185058593... |
librarian-bots/arxiv-metadata-snapshot | 2023-10-16T06:12:12.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"arxiv",
"science",
"region:us"
] | librarian-bots | null | null | 1 | 8 | 2023-10-08T21:40:28 | ---
language:
- en
license: cc0-1.0
size_categories:
- 1M<n<10M
task_categories:
- text-generation
- text-classification
pretty_name: arXiv Metadata Dataset
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: abstract
dtype: string
- name: versions
list:
- name: version
dtype: string
- name: created
dtype: string
- name: update_date
dtype: timestamp[s]
- name: authors_parsed
sequence:
sequence: string
splits:
- name: train
num_bytes: 3491601823.0
num_examples: 2340115
download_size: 1953513808
dataset_size: 3491601823.0
tags:
- arxiv
- science
---
# Dataset Card for "arxiv-metadata-oai-snapshot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
This is a mirror of the metadata portion of the arXiv [dataset](https://www.kaggle.com/datasets/Cornell-University/arxiv/versions/147).
The sync takes place weekly, so this mirror may fall slightly behind the source dataset if it is updated more frequently.
## Metadata
This dataset is a mirror of the original ArXiv data. This dataset contains an entry for each paper, containing:
- id: ArXiv ID (can be used to access the paper, see below)
- submitter: Who submitted the paper
- authors: Authors of the paper
- title: Title of the paper
- comments: Additional info, such as number of pages and figures
- journal-ref: Information about the journal the paper was published in
- doi: [Digital Object Identifier](https://www.doi.org)
- abstract: The abstract of the paper
- categories: Categories / tags in the ArXiv system
- versions: A version history
You can access each paper directly on ArXiv using these links:
- `https://arxiv.org/abs/{id}`: Page for this paper including its abstract and further links
- `https://arxiv.org/pdf/{id}`: Direct link to download the PDF
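The link patterns above can be built with a small helper. This is a minimal sketch; the id value used here is purely illustrative.

```python
# Build the per-paper links described above from an arXiv id.
def arxiv_links(paper_id: str) -> dict:
    """Return the abstract page and direct PDF link for an arXiv id."""
    return {
        "abs": f"https://arxiv.org/abs/{paper_id}",
        "pdf": f"https://arxiv.org/pdf/{paper_id}",
    }

links = arxiv_links("1911.01616")
print(links["abs"])  # https://arxiv.org/abs/1911.01616
print(links["pdf"])  # https://arxiv.org/pdf/1911.01616
```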
| 2,331 | [
[
-0.040069580078125,
-0.01042938232421875,
0.023345947265625,
-0.01580810546875,
-0.0196990966796875,
0.0009446144104003906,
0.0341796875,
-0.0262298583984375,
0.038543701171875,
0.031494140625,
-0.03558349609375,
-0.041900634765625,
-0.038177490234375,
0.016... |
diwank/storycommonsense-chatml | 2023-10-27T05:20:08.000Z | [
"region:us"
] | diwank | null | null | 0 | 8 | 2023-10-08T23:31:35 | ---
dataset_info:
features:
- name: chatml
list:
- name: content
dtype: string
- name: name
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10730524
num_examples: 4827
download_size: 2141314
dataset_size: 10730524
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "storycommonsense-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.0306854248046875,
-0.0229644775390625,
0.01727294921875,
0.0193328857421875,
-0.00550079345703125,
0.02587890625,
0.0004200935363769531,
-0.004924774169921875,
0.048553466796875,
0.0408935546875,
-0.0655517578125,
-0.06390380859375,
-0.040435791015625,
-0... |
Hariharavarshan/Assessment | 2023-10-09T00:11:24.000Z | [
"region:us"
] | Hariharavarshan | null | null | 0 | 8 | 2023-10-09T00:10:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
benayas/snips_llm | 2023-10-09T01:40:59.000Z | [
"region:us"
] | benayas | null | null | 0 | 8 | 2023-10-09T01:40:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 2310806
num_examples: 13084
- name: test
num_bytes: 248670
num_examples: 1400
download_size: 546576
dataset_size: 2559476
---
# Dataset Card for "snips_llm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 572 | [
[
-0.021087646484375,
-0.01483154296875,
0.0280303955078125,
0.01751708984375,
-0.0272674560546875,
0.0068511962890625,
0.01739501953125,
0.0002665519714355469,
0.07672119140625,
0.0516357421875,
-0.0657958984375,
-0.058502197265625,
-0.06646728515625,
-0.0037... |
carnival13/eng_sur_val_DA_tokenized | 2023-10-09T07:13:09.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 8 | 2023-10-09T07:13:06 | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 30391635
num_examples: 22390
download_size: 5882210
dataset_size: 30391635
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "eng_sur_val_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 544 | [
[
-0.03985595703125,
-0.025177001953125,
-0.003269195556640625,
0.0306396484375,
-0.0212554931640625,
0.00891876220703125,
0.020416259765625,
-0.01312255859375,
0.07440185546875,
0.049346923828125,
-0.04376220703125,
-0.06085205078125,
-0.048248291015625,
-0.0... |
sankettgorey/donut_5 | 2023-10-09T08:00:21.000Z | [
"region:us"
] | sankettgorey | null | null | 0 | 8 | 2023-10-09T07:57:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 437960649.0
num_examples: 1000
download_size: 402681326
dataset_size: 437960649.0
---
# Dataset Card for "donut_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.026611328125,
-0.0091552734375,
0.0212249755859375,
0.006916046142578125,
-0.0006718635559082031,
0.00586700439453125,
0.0166168212890625,
-0.003997802734375,
0.05255126953125,
0.03631591796875,
-0.052978515625,
-0.054534912109375,
-0.04302978515625,
-0.0... |
ngarneau/fm_queries | 2023-10-09T14:44:18.000Z | [
"region:us"
] | ngarneau | null | null | 0 | 8 | 2023-10-09T08:00:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
boundless-asura/wikihow | 2023-10-09T08:34:42.000Z | [
"region:us"
] | boundless-asura | null | null | 0 | 8 | 2023-10-09T08:33:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Goorm-AI-04/Drone_Doppler_Noise | 2023-10-09T09:27:59.000Z | [
"region:us"
] | Goorm-AI-04 | null | null | 0 | 8 | 2023-10-09T09:27:42 | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence: float64
- name: label
dtype: int64
- name: type
dtype: string
- name: noise_var_0.0001
sequence:
sequence:
sequence: float64
- name: noise_var_0.0005
sequence:
sequence:
sequence: float64
- name: noise_var_0.001
sequence:
sequence:
sequence: float64
- name: noise_var_0.005
sequence:
sequence:
sequence: float64
- name: noise_var_0.01
sequence:
sequence:
sequence: float64
splits:
- name: train
num_bytes: 395275453
num_examples: 3497
download_size: 314133140
dataset_size: 395275453
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Drone_Doppler_Noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 969 | [
[
-0.04486083984375,
-0.00849151611328125,
0.006183624267578125,
0.006481170654296875,
-0.0185089111328125,
0.01055145263671875,
0.01166534423828125,
-0.021331787109375,
0.04449462890625,
0.01549530029296875,
-0.0576171875,
-0.04119873046875,
-0.0223236083984375,
... |
dmrau/cqadupstack-webmasters | 2023-10-09T12:41:03.000Z | [
"region:us"
] | dmrau | null | null | 0 | 8 | 2023-10-09T12:41:00 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 34792
num_examples: 506
- name: corpus
num_bytes: 11659413
num_examples: 17405
download_size: 6885106
dataset_size: 11694205
---
# Dataset Card for "cqadupstack-webmasters"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 627 | [
[
-0.052520751953125,
-0.00418853759765625,
0.00010877847671508789,
0.022918701171875,
-0.004978179931640625,
0.0233612060546875,
0.0172271728515625,
-0.016448974609375,
0.043426513671875,
0.04351806640625,
-0.0621337890625,
-0.044647216796875,
-0.035186767578125,... |
dmrau/cqadupstack-unix | 2023-10-09T12:42:00.000Z | [
"region:us"
] | dmrau | null | null | 0 | 8 | 2023-10-09T12:41:55 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 72357
num_examples: 1072
- name: corpus
num_bytes: 46102756
num_examples: 47382
download_size: 24571026
dataset_size: 46175113
---
# Dataset Card for "cqadupstack-unix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 623 | [
[
-0.04339599609375,
-0.0121612548828125,
0.0283203125,
0.00855255126953125,
-0.01593017578125,
0.025177001953125,
0.025390625,
-0.006725311279296875,
0.05865478515625,
0.045867919921875,
-0.0577392578125,
-0.04840087890625,
-0.033843994140625,
-0.023574829101... |
dmrau/cqadupstack-wordpress | 2023-10-09T12:42:09.000Z | [
"region:us"
] | dmrau | null | null | 0 | 8 | 2023-10-09T12:42:04 | ---
configs:
- config_name: default
data_files:
- split: queries
path: data/queries-*
- split: corpus
path: data/corpus-*
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: queries
num_bytes: 35736
num_examples: 541
- name: corpus
num_bytes: 53026140
num_examples: 48605
download_size: 26551471
dataset_size: 53061876
---
# Dataset Card for "cqadupstack-wordpress"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 627 | [
[
-0.05145263671875,
-0.0123291015625,
0.02239990234375,
0.0176544189453125,
-0.022247314453125,
0.0195465087890625,
0.01324462890625,
-0.0128631591796875,
0.05950927734375,
0.0274200439453125,
-0.06072998046875,
-0.06085205078125,
-0.04498291015625,
-0.015792... |
hmao/rule_learning_data_v1 | 2023-10-10T16:29:42.000Z | [
"region:us"
] | hmao | null | null | 0 | 8 | 2023-10-09T22:30:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: rule
dtype: string
- name: task_name
dtype: string
- name: configuration
dtype: string
- name: description
dtype: string
- name: filepath
dtype: string
- name: old_instruction
dtype: string
- name: prompt
dtype: string
- name: 'codellama/CodeLlama-34b-hf---{"do_sample": false, "max_new_tokens": 256,
"truncate": 15744, "return_full_text": false}'
dtype: string
splits:
- name: train
num_bytes: 7650436
num_examples: 2009
download_size: 2660984
dataset_size: 7650436
---
# Dataset Card for "rule_learning_data_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 840 | [
[
-0.025115966796875,
-0.025115966796875,
0.00920867919921875,
0.0106048583984375,
-0.0203094482421875,
-0.0244293212890625,
0.025238037109375,
0.000408172607421875,
0.050628662109375,
0.038970947265625,
-0.07470703125,
-0.07000732421875,
-0.036956787109375,
-... |
jpiorko/marketingagency | 2023-10-11T07:10:44.000Z | [
"region:us"
] | jpiorko | null | null | 0 | 8 | 2023-10-10T07:21:51 | <s>[INST] Compose a professional email with the following points: Me chinese cook 10 years Good good cook People love my food Can you hire me? [/INST] Thanks for the information. Unfortunately, your initial message does not contain enough details or structure to compose a compelling professional email to a potential employer. Here are some suggestions for how we can improve the message: 1) Let's first clarify the type of cooking position you are interested in. Are you looking for a job as a line cook, sous chef, executive chef, or something else? This will help determine how to emphasize relevant experience and skills. 2) What specific experience do you have? For example, "Over 10 years of experience preparing authentic Sichuan and Cantonese cuisine." Highlight any relevant skills, like knife skills, cooking techniques, menu development, etc. Mention the types of cuisine you specialize in. 3) Have you worked in a professional kitchen before? If so, mention the name and type of restaurant (upscale, casual dining, etc.). Emphasize relevant responsibilities and accomplishments. If not, that's fine too, just be clear that you're looking to transition into a professional kitchen and highlight related skills and passion for cooking. 4) Do you have any certifications or training? Note any culinary degrees, certificates, or other credentials. Even informal training can show your dedication to the craft. 5) Are you looking for full-time, part-time, temporary, or contract work? Be specific about the type of opportunity you're seeking. With the answers to these questions, we can craft an email in a standard business letter format. The opening can express your interest in the position and summarize your relevant experience and qualifications. The body can go into more depth on your experience, skills, and goals. 
The closing can reiterate your interest and mention that you'd welcome the opportunity to further discuss the role. Does this help? Let me know if you would like help crafting the full email. I'm happy to assist you in composing a professional and compelling message to potential employers. </s>
| 2,128 | [
[
-0.0113067626953125,
-0.03680419921875,
0.02996826171875,
0.0168304443359375,
-0.0031986236572265625,
0.0009365081787109375,
0.01297760009765625,
-0.0236663818359375,
0.044219970703125,
0.051483154296875,
-0.047760009765625,
-0.04644775390625,
-0.04339599609375,... |
twdent/HikingHD | 2023-10-10T14:05:24.000Z | [
"region:us"
] | twdent | null | null | 0 | 8 | 2023-10-10T13:10:23 | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 515009990.0
num_examples: 38
download_size: 159208907
dataset_size: 515009990.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "HikingHD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 484 | [
[
-0.03656005859375,
-0.0164947509765625,
0.0172882080078125,
0.02532958984375,
-0.015380859375,
-0.006256103515625,
0.030731201171875,
-0.0200347900390625,
0.051300048828125,
0.0175323486328125,
-0.08306884765625,
-0.05450439453125,
-0.0377197265625,
-0.03089... |
HamdanXI/daily_dialog_text_to_gloss_final | 2023-10-10T13:31:14.000Z | [
"region:us"
] | HamdanXI | null | null | 0 | 8 | 2023-10-10T13:24:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: gloss
dtype: string
splits:
- name: train
num_bytes: 6048869
num_examples: 75415
download_size: 3960195
dataset_size: 6048869
---
# Dataset Card for "daily_dialog_text_to_gloss_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 495 | [
[
-0.029296875,
-0.041229248046875,
0.034332275390625,
0.0157623291015625,
-0.01031494140625,
-0.0100860595703125,
0.0015583038330078125,
-0.01326751708984375,
0.045013427734375,
0.051605224609375,
-0.0706787109375,
-0.06622314453125,
-0.0309906005859375,
-0.0... |