Schema: id (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | paperswithcode_id (string) | tags (list) | createdAt (string) | created (timestamp[us]) | card (string) | embedding (list) | library_name (string) | pipeline_tag (string) | modelId (string)
Jellywibble/222_handwritten_and_regen_prompts | author: Jellywibble | last_modified: 2022-11-20T01:11:06Z | downloads: 13 | likes: 1 | tags: [region:us] | created: 2022-11-20T01:10:08Z | card: Entry not found
autoevaluate/autoeval-staging-eval-project-37b497c4-c065-4454-9a21-53d55a38d3d3-2826 | author: autoevaluate | last_modified: 2022-11-20T13:02:54Z | downloads: 13 | likes: 0 | tags: [autotrain, evaluation, region:us] | created: 2022-11-20T13:02:16Z | card:
---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
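As a rough sketch of how the stated metric could be computed over this predictions repository; the split name and the `predictions`/`references` column names are assumptions, since the card does not document the stored schema:
```python
from datasets import load_dataset
import evaluate

# Repository id comes from this card; split and column names are assumptions.
preds = load_dataset(
    "autoevaluate/autoeval-staging-eval-project-37b497c4-c065-4454-9a21-53d55a38d3d3-2826",
    split="train",
)
mcc = evaluate.load("matthews_correlation")
print(mcc.compute(predictions=preds["predictions"], references=preds["references"]))
```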
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-11ed4317-15c4-4e98-9e37-8cdfe6d38dfb-4947 | author: autoevaluate | last_modified: 2022-11-21T13:06:00Z | downloads: 13 | likes: 0 | tags: [autotrain, evaluation, region:us] | created: 2022-11-21T13:05:17Z | card:
---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
antoniomenezes/go_emotions_ptbr | author: antoniomenezes | last_modified: 2022-11-21T14:27:31Z | downloads: 13 | likes: 4 | paperswithcode_id: goemotions | tags: [task_categories:text-classification, task_ids:multi-class-classification, task_ids:multi-label-classification, annotations_creators:crowdsourced, language_creators:found, multilinguality:2 languages, size_categories:100K<n<1M, size_categories:10K<n<100K, source_datasets:modified, …] | created: 2022-11-21T13:38:59Z | card:
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
- pt
license:
- apache-2.0
multilinguality:
- 2 languages
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- modified
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: goemotions
pretty_name: GoEmotions
configs:
- raw
- simplified
tags:
- emotion
dataset_info:
- config_name: raw
features:
- name: text
dtype: string
- name: id
dtype: string
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: float32
- name: rater_id
dtype: int32
- name: example_very_unclear
dtype: bool
- name: admiration
dtype: int32
- name: amusement
dtype: int32
- name: anger
dtype: int32
- name: annoyance
dtype: int32
- name: approval
dtype: int32
- name: caring
dtype: int32
- name: confusion
dtype: int32
- name: curiosity
dtype: int32
- name: desire
dtype: int32
- name: disappointment
dtype: int32
- name: disapproval
dtype: int32
- name: disgust
dtype: int32
- name: embarrassment
dtype: int32
- name: excitement
dtype: int32
- name: fear
dtype: int32
- name: gratitude
dtype: int32
- name: grief
dtype: int32
- name: joy
dtype: int32
- name: love
dtype: int32
- name: nervousness
dtype: int32
- name: optimism
dtype: int32
- name: pride
dtype: int32
- name: realization
dtype: int32
- name: relief
dtype: int32
- name: remorse
dtype: int32
- name: sadness
dtype: int32
- name: surprise
dtype: int32
- name: neutral
dtype: int32
- name: texto
dtype: string
splits:
- name: train
num_bytes: 55343630
num_examples: 211225
download_size: 42742918
dataset_size: 55343630
- config_name: simplified
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
0: admiration
1: amusement
2: anger
3: annoyance
4: approval
5: caring
6: confusion
7: curiosity
8: desire
9: disappointment
10: disapproval
11: disgust
12: embarrassment
13: excitement
14: fear
15: gratitude
16: grief
17: joy
18: love
19: nervousness
20: optimism
21: pride
22: realization
23: relief
24: remorse
25: sadness
26: surprise
27: neutral
- name: id
dtype: string
splits:
- name: train
num_bytes: 4224198
num_examples: 43410
- name: validation
num_bytes: 527131
num_examples: 5426
- name: test
num_bytes: 524455
num_examples: 5427
download_size: 4394818
dataset_size: 5275784
---
# Dataset Card for GoEmotions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research/google-research/tree/master/goemotions
- **Repository:** https://github.com/google-research/google-research/tree/master/goemotions
- **Paper:** https://arxiv.org/abs/2005.00547
- **Leaderboard:**
- **Point of Contact:** [Dora Demszky](https://nlp.stanford.edu/~ddemszky/index.html)
### Dataset Summary
The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.
The raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test
splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class, multi-label emotion classification.
### Languages
The data is in English and Brazilian Portuguese (translated with Google Translate).
## Dataset Structure
### Data Instances
Each instance is a reddit comment with a corresponding ID and one or more emotion annotations (or neutral).
### Data Fields
The simplified configuration includes:
- `text`: the reddit comment
- `texto`: the reddit comment in Portuguese
- `labels`: the emotion annotations
- `comment_id`: unique identifier of the comment (can be used to look up the entry in the raw dataset)
In addition to the above, the raw data includes:
* `author`: The Reddit username of the comment's author.
* `subreddit`: The subreddit that the comment belongs to.
* `link_id`: The link id of the comment.
* `parent_id`: The parent id of the comment.
* `created_utc`: The timestamp of the comment.
* `rater_id`: The unique id of the annotator.
* `example_very_unclear`: Whether the annotator marked the example as being very unclear or difficult to label (in this
case they did not choose any emotion labels).
In the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the
simplified data.
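A minimal sketch of recovering a simplified-style list of label ids from the raw config's per-emotion 0/1 columns; this is illustrative only, not the script that produced the simplified config, and the column order follows the feature list above:
```python
from datasets import load_dataset

EMOTIONS = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise", "neutral",
]

raw = load_dataset("antoniomenezes/go_emotions_ptbr", "raw", split="train")

def to_label_ids(example):
    # Collect the indices of all emotion columns flagged with 1.
    example["labels"] = [i for i, name in enumerate(EMOTIONS) if example[name] == 1]
    return example

raw_with_labels = raw.map(to_label_ids)
print(raw_with_labels[0]["labels"])
```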
### Data Splits
The simplified data includes a set of train/val/test splits with 43,410, 5,426, and 5,427 examples respectively.
## Dataset Creation
### Curation Rationale
From the paper abstract:
> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
fine-grained typology, adaptable to multiple downstream tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Reddit comments via a variety of automated methods discussed in Section 3.1 of the paper.
#### Who are the source language producers?
English-speaking Reddit users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Annotations were produced by 3 English-speaking crowdworkers in India.
### Personal and Sensitive Information
This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
are typically dissociated from personal real-world identities, this is not always the case. It may therefore be
possible to discover the identities of the individuals who created this content in some cases.
## Considerations for Using the Data
### Social Impact of Dataset
Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness (see
[this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).
### Discussion of Biases
From the authors' github page:
> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).
### Licensing Information
The GitHub repository which houses this dataset has an
[Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).
### Citation Information
```bib
@inproceedings{demszky2020goemotions,
  author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
  booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
  title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
  year = {2020}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
Thanks to [@antoniomenezes](https://github.com/antoniomenezes) for extending this dataset.
Jellywibble/dummy_dalio_questions_score | author: Jellywibble | last_modified: 2022-11-21T22:36:32Z | downloads: 13 | likes: 0 | tags: [region:us] | created: 2022-11-21T21:56:04Z | card: Dummy dataset to check reward model training is learning correctly. Score is the number of question marks in Ray's response.
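A one-line sketch of the scoring rule stated in the card above; the function and variable names are illustrative, not part of the dataset:
```python
def score(response: str) -> int:
    # The card defines the score as the number of question marks in the response.
    return response.count("?")

assert score("Why? How? Really?") == 3
```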
Jellywibble/50_scored_qa_pairs | author: Jellywibble | last_modified: 2022-11-22T04:44:09Z | downloads: 13 | likes: 0 | tags: [region:us] | created: 2022-11-22T04:44:05Z | card: Entry not found
Nerfgun3/ouroboros_embeddings | author: Nerfgun3 | last_modified: 2022-11-22T23:37:12Z | downloads: 13 | likes: 7 | tags: [language:en, license:creativeml-openrail-m, stable-diffusion, text-to-image, image-to-image, region:us] | created: 2022-11-22T23:28:12Z | card:
---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/datasets/Nerfgun3/ouroboros_embeddings/resolve/main/ouroboros_showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: false
---
# Ouroboros Style Embeddings / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/ouroboros_embeddings/resolve/main/ouroboros_showcase.jpg"/>
## Intro
Both embeddings are quite similar in style, but were trained on different datasets.
## Usage
To use my embeddings, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
Personally, I would recommend using my embeddings with a strength of 0.8, like ```"drawn by (filename:0.8)"```
I trained both embeddings for two epochs, up to 8,000 steps.
I hope you enjoy the embeddings. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
### Dark ouroboros
This embedding was trained on a dataset with dark backgrounds.
To use it in a prompt: ```"drawn by dark_ouroboros"```
### White ouroboros
This embedding was trained on a dataset with white backgrounds.
To use it in a prompt: ```"drawn by white_ouroboros"```
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
MLRS/masri_test | author: MLRS | last_modified: 2023-03-30T11:08:22Z | downloads: 13 | likes: 1 | tags: [task_categories:automatic-speech-recognition, annotations_creators:expert-generated, language_creators:other, multilinguality:monolingual, size_categories:n<1K, source_datasets:original, language:mt, license:cc-by-nc-sa-4.0, masri, maltese, masri-project, malta, test corpus, …] | created: 2022-11-25T17:06:57Z | card:
---
annotations_creators:
- expert-generated
language:
- mt
language_creators:
- other
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: >-
MASRI-TEST CORPUS: Audio and Transcriptions in Maltese extracted from the
YouTube channel of the University of Malta.
size_categories:
- n<1K
source_datasets:
- original
tags:
- masri
- maltese
- masri-project
- malta
- test corpus
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for masri_test
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MASRI Project](https://www.um.edu.mt/projects/masri/)
- **Repository:** [MASRI Data Repo](https://github.com/UMSpeech/)
- **Point of Contact:** [Carlos Mena](mailto:carlos.mena@ciempiess.org), [Andrea De Marco](mailto:andrea.demarco@um.edu.mt), [Claudia Borg](mailto:claudia.borg@um.edu.mt)
### Dataset Summary
The MASRI-TEST CORPUS was created out of YouTube videos belonging to the channel of the [University of Malta](https://www.youtube.com/user/universityofmalta). It has a length of 1 hour and is gender balanced, with the same number of male and female speakers.
### Example Usage
The MASRI-TEST contains only the test split:
```python
from datasets import load_dataset
masri_test = load_dataset("MLRS/masri_test")
```
It is also valid to do:
```python
from datasets import load_dataset
masri_test = load_dataset("MLRS/masri_test",split="test")
```
### Supported Tasks
automatic-speech-recognition: The dataset can be used to test a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
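A minimal sketch of the WER evaluation mentioned above, using the Hugging Face `evaluate` library; the Whisper checkpoint is only a placeholder (any ASR model with Maltese support could be substituted), and the hypothesis is lowercased to match the corpus's lowercase transcriptions:
```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

masri_test = load_dataset("MLRS/masri_test", split="test")
# Placeholder checkpoint; substitute any ASR model with Maltese support.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
wer = evaluate.load("wer")

sample = masri_test[0]
# The corpus is 16 kHz mono, which matches Whisper's expected input rate.
hypothesis = asr(sample["audio"]["array"])["text"].lower()
print(wer.compute(predictions=[hypothesis], references=[sample["normalized_text"]]))
```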
### Languages
The language of the corpus is Maltese.
## Dataset Structure
### Data Instances
```python
{
'audio_id': 'MSRTS_M_17_TS_00001',
'audio': {
'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/9158ecbeeb3532038f3fe3d53e0adda1f790c9363a613bac32c454a39d9c682c/test/male/M_17/MSRTS_M_17_TS_00001.flac',
'array': array([ 0.0020752 , 0.00283813, 0.00167847, ..., -0.0010376 ,
-0.00091553, -0.00100708], dtype=float32),
'sampling_rate': 16000
},
'speaker_id': 'M_17',
'gender': 'male',
'duration': 5.920000076293945,
'normalized_text': 'ignazio saverio mifsud kien qed jippjana kien qed iħejji tliet volumi tal-biblijoteka maltese'
}
```
### Data Fields
* `audio_id` (string) - id of audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of speaker
* `gender` (string) - gender of speaker (male or female)
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized audio segment transcription
### Data Splits
The corpus contains only a test split, which has a total of 668 speech files from 17 male speakers and 17 female speakers, with a total duration of 1 hour.
## Dataset Creation
### Curation Rationale
The MASRI-TEST CORPUS (MTSC) has the following characteristics:
* The MTSC has an exact duration of 1 hour and 0 minutes. It has 668 audio files.
* The MTSC has recordings from 34 different speakers: 17 men and 17 women.
* Data in MTSC is classified by speaker. Therefore, all the recordings of each individual speaker are stored in one single directory.
* Data is also classified according to the gender (male/female) of the speakers.
* Every audio file in the MTSC has a duration between 3 and 10 seconds approximately.
* Audio files in the MTSC are distributed in 16 kHz, 16-bit, mono format.
* Transcriptions in MTSC are in lowercase. No punctuation marks are permitted except for dashes (-) and apostrophes (') due to their importance in Maltese orthography.
### Source Data
#### Initial Data Collection and Normalization
The MASRI-TEST CORPUS was possible due to a collaboration of two different Universities. The data selection and audio segmentation was performed by the [CIEMPIESS-UNAM Project](http://www.ciempiess.org/) at the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/) in Mexico City. The audio transcription and corpus edition was performed by the [MASRI Team](https://www.um.edu.mt/projects/masri/) at the [University of Malta](https://www.um.edu.mt/) in the Msida Campus.
### Annotations
#### Annotation process
Proper nouns and other words pronounced in languages other than Maltese (mainly from English, Italian, French and German) were transcribed in their respective orthographic system.
#### Who are the annotators?
The audio transcription was performed by expert native speakers at the [University of Malta](https://www.um.edu.mt/) in the Msida Campus.
### Personal and Sensitive Information
The dataset could contain names revealing the identity of some speakers; on the other hand, the recordings come from a public repository (YouTube), so there was no real intent by the participants to remain anonymous. In any case, you agree not to attempt to determine the identity of speakers in this dataset.
**Notice:** Should you consider that our data contains material that you own and that should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
* Send the request to [Carlos Mena](mailto:carlos.mena@ciempiess.org)
Take down: We will comply with legitimate requests by removing the affected sources from the corpus.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is challenging because it contains spontaneous speech, so it will be helpful for the ASR community to evaluate their acoustic models in Maltese.
### Discussion of Biases
The dataset is intended to be gender balanced. It comprises 17 male speakers and 17 female speakers.
### Other Known Limitations
Neither the MASRI Team nor the CIEMPIESS-UNAM Project guarantees the accuracy of this corpus, nor its suitability for any specific purpose. In fact, a number of errors, omissions and inconsistencies are expected to be found within the corpus.
### Dataset Curators
The audio recordings were collected and segmented by students belonging to the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html); the corpus was curated by Carlos Daniel Hernández Mena, and its transcriptions were manually produced by Ayrton-Didier Brincat during 2020.
### Licensing Information
[CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). The copyright remains with the original owners of the video.
As the data is taken from YouTube, we invoke the same argument of "fair use" as in the [Voxlingua107](http://bark.phon.ioc.ee/voxlingua107/) dataset, which is:
**"While YouTube users own the copyright to their own videos, using the audio in the videos for training speech recognition models has very limited and transformative purpose and qualifies thus as "fair use" of copyrighted materials. YouTube’s terms of service forbid downloading, storing and distribution of videos. However, the aim of this rule is clearly to forbid unfair monetization of the content by third-party sites and applications. Our dataset contains the videos in segmented audio-only form that makes the monetization of the actual distributed content extremely difficult."**
### Citation Information
```
@misc{carlosmenamasritest2020,
title={MASRI-TEST CORPUS: Audio and Transcriptions in Maltese extracted from the YouTube channel of the University of Malta.},
author={Hernandez Mena, Carlos Daniel and Brincat, Ayrton-Didier and Gatt, Albert and DeMarco, Andrea and Borg, Claudia and van der Plas, Lonneke and Meza Ruiz, Iván Vladimir},
journal={MASRI Project, Malta},
year={2020},
url={https://huggingface.co/datasets/MLRS/masri_test},
}
```
### Contributions
The authors would like to thank Alberto Templos Carbajal, Elena Vera and Angélica Gutiérrez for their support of the social service program ["Desarrollo de Tecnologías del Habla"](http://profesores.fi-b.unam.mx/carlos_mena/servicio.html) at the ["Facultad de Ingeniería (FI)"](https://www.ingenieria.unam.mx/) of the [Universidad Nacional Autónoma de México (UNAM)](https://www.unam.mx/). We also thank the social service students for all the hard work during the audio segmentation.
ashraf-ali/quran-data | author: ashraf-ali | last_modified: 2022-12-10T17:35:33Z | downloads: 13 | likes: 5 | paperswithcode_id: quran-data | tags: [task_categories:automatic-speech-recognition, language_creators:Tarteel.io, license:cc0-1.0, region:us] | created: 2022-11-28T17:14:02Z | card:
---
language_creators:
- Tarteel.io
license:
- cc0-1.0
size_categories:
ar:
- 43652
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: quran-data
pretty_name: Quran Audio
language_bcp47:
- ar
---
# Dataset Card for Quran Audio
Content:
* Full Quran recitations by 7 imams: 7 × 6,236 wav files
  - a CSV file contains the text info for an 11k subset of short wav files
* Tarteel.io user dataset: ~25k wav files
  - a CSV file contains the text info for an 18k subset of recordings accepted for quality
society-ethics/medmcqa_age_gender | author: society-ethics | last_modified: 2022-11-30T02:59:21Z | downloads: 13 | likes: 1 | tags: [region:us] | created: 2022-11-30T02:20:29Z | card:
---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: opa
dtype: string
- name: opb
dtype: string
- name: opc
dtype: string
- name: opd
dtype: string
- name: cop
dtype: int64
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: age.child
dtype: bool
- name: age.youth
dtype: bool
- name: age.adult
dtype: bool
- name: age.senior
dtype: bool
- name: gender.male
dtype: bool
- name: gender.female
dtype: bool
splits:
- name: train
num_bytes: 132040415
num_examples: 182822
- name: validation
num_bytes: 2224566
num_examples: 4183
download_size: 84155335
dataset_size: 134264981
---
# Dataset Card for "medmcqa_age_gender"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
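A minimal sketch of slicing this dataset by the boolean demographic columns in the schema above; the chosen columns are just an example:
```python
from datasets import load_dataset

ds = load_dataset("society-ethics/medmcqa_age_gender", split="validation")
# Keep only questions flagged as mentioning both female patients and senior age.
subset = ds.filter(lambda ex: ex["gender.female"] and ex["age.senior"])
print(len(subset))
```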
deutsche-telekom/NLU-few-shot-benchmark-en-de | author: deutsche-telekom | last_modified: 2023-01-01T07:23:53Z | downloads: 13 | likes: 1 | tags: [task_categories:text-classification, task_ids:intent-classification, multilinguality:multilingual, size_categories:1K<n<10K, source_datasets:extended|deutsche-telekom/NLU-Evaluation-Data-en-de, language:en, language:de, license:cc-by-4.0, region:us] | created: 2022-12-02T16:26:59Z | card:
---
license: cc-by-4.0
language:
- en
- de
multilinguality:
- multilingual
source_datasets:
- extended|deutsche-telekom/NLU-Evaluation-Data-en-de
size_categories:
- 1K<n<10K
task_categories:
- text-classification
task_ids:
- intent-classification
---
# NLU Few-shot Benchmark - English and German
This is a few-shot training dataset from the domain of human-robot interaction.
It contains texts in German and English language with 64 different utterances (classes).
Each utterance (class) has exactly 20 samples in the training set.
This leads to a total of 1280 different training samples.
The dataset is intended to benchmark the intent classifiers of chatbots in English and especially in German.
We are building on our
[deutsche-telekom/NLU-Evaluation-Data-en-de](https://huggingface.co/datasets/deutsche-telekom/NLU-Evaluation-Data-en-de)
data set.
## Processing Steps
The following steps were applied (a pandas sketch follows this list):
- drop `NaN` values
- drop duplicates in `answer_de` and `answer`
- delete all rows where `answer_de` has more than 70 characters
- add column `label`: `df["label"] = df["scenario"] + "_" + df["intent"]`
- remove classes (`label`) with less than 25 samples:
- `audio_volume_other`
- `cooking_query`
- `general_greet`
- `music_dislikeness`
- random selection for train set - exactly 20 samples for each class (`label`)
- rest for test set
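A minimal pandas sketch of the steps above; it assumes the source data loads as a single `train` split with the columns named in the list (`answer`, `answer_de`, `scenario`, `intent`), and is illustrative rather than the exact script used:
```python
import pandas as pd
from datasets import load_dataset

# Split name "train" is an assumption about the source dataset.
df = load_dataset("deutsche-telekom/NLU-Evaluation-Data-en-de", split="train").to_pandas()

df = df.dropna(subset=["answer", "answer_de"])
df = df.drop_duplicates(subset=["answer_de", "answer"])
df = df[df["answer_de"].str.len() <= 70]
df["label"] = df["scenario"] + "_" + df["intent"]

# Drop classes with fewer than 25 samples, then take exactly 20 per class for train.
counts = df["label"].value_counts()
df = df[df["label"].isin(counts[counts >= 25].index)]
train = df.groupby("label", group_keys=False).sample(n=20, random_state=42)
test = df.drop(train.index)
```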
## Copyright
Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.com/)
All data is released under the
[Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
| [
-0.3596230149269104,
-0.7451614141464233,
0.2720522880554199,
0.18708603084087372,
-0.054273683577775955,
-0.3283567726612091,
-0.31141331791877747,
-0.34063485264778137,
0.0365314781665802,
0.45567837357521057,
-0.6692185997962952,
-0.6433144807815552,
-0.39002662897109985,
0.493560165166... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Shularp/un_multi-ar-en | author: Shularp | last_modified: 2022-12-07T11:00:47Z | downloads: 13 | likes: 0 | tags: [region:us] | created: 2022-12-07T10:56:27Z | card:
---
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 4189844561
num_examples: 9759125
download_size: 1926773979
dataset_size: 4189844561
---
# Dataset Card for "un_multi-ar-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
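A minimal sketch of reading the `translation` feature shown in the schema above; streaming is optional and used here only to avoid the ~1.9 GB download:
```python
from datasets import load_dataset

ds = load_dataset("Shularp/un_multi-ar-en", split="train", streaming=True)
example = next(iter(ds))
print(example["translation"]["ar"])  # Arabic side
print(example["translation"]["en"])  # English side
```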
saibo/bookcorpus_deduplicated | author: saibo | last_modified: 2022-12-29T16:24:22Z | downloads: 13 | likes: 1 | tags: [arxiv:2105.05241, arxiv:2107.06499, arxiv:2209.00099, region:us] | created: 2022-12-28T16:41:10Z | card:
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2867856394
num_examples: 38832894
download_size: 1794567875
dataset_size: 2867856394
---
# Dataset Card for "bookcorpus_deduplicated"
## Dataset Summary
This is a deduplicated version of the original [Book Corpus dataset](https://huggingface.co/datasets/bookcorpus).
The Book Corpus (Zhu et al., 2015), which was used to train popular models such as BERT, has a substantial amount of exact-duplicate documents according to [Bandy and Vincent (2021)](https://arxiv.org/abs/2105.05241)
[Bandy and Vincent (2021)](https://arxiv.org/abs/2105.05241) find that thousands of books in BookCorpus are duplicated, with only 7,185 unique books out of 11,038 total.
Effect of deduplication:
- Number of lines: 38,832,894 vs. 74,004,228
- Dataset size: 2.91 GB vs. 4.63 GB
Duplicate text has been dropped; only the first appearance of each line is kept, and the original order of appearance is preserved.
## Why deduplicate?
Deduplication of training data has shown various advantages, including:
- require fewer training steps to achieve the same or better accuracy
- train models that emit memorized text ten times less frequently
- reduce carbon emission and energy consumption
cf [Deduplicating Training Data Makes Language Models Better](https://arxiv.org/abs/2107.06499)
## Deduplication script
```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("bookcorpus")["train"]["text"]
df = pd.DataFrame({"text": dataset})
# drop duplicates (exact match), keeping the first occurrence
df_filtered = df["text"].drop_duplicates()
df_filtered.to_csv("bookcorpus_filtered.csv", index=False, header=False)
new_dataset = load_dataset("text", data_files={"train": "bookcorpus_filtered.csv"})
```
The running time is short, less than a few minutes.
More sophisticated deduplication algorithms can be applied to improve the results, such as https://github.com/google-research/deduplicate-text-datasets
## Reference
```bib
@misc{https://doi.org/10.48550/arxiv.2105.05241,
doi = {10.48550/ARXIV.2105.05241},
url = {https://arxiv.org/abs/2105.05241},
author = {Bandy, Jack and Vincent, Nicholas},
keywords = {Computation and Language (cs.CL), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
```bib
@misc{https://doi.org/10.48550/arxiv.2107.06499,
doi = {10.48550/ARXIV.2107.06499},
url = {https://arxiv.org/abs/2107.06499},
author = {Lee, Katherine and Ippolito, Daphne and Nystrom, Andrew and Zhang, Chiyuan and Eck, Douglas and Callison-Burch, Chris and Carlini, Nicholas},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Deduplicating Training Data Makes Language Models Better},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
```bib
@misc{https://doi.org/10.48550/arxiv.2209.00099,
doi = {10.48550/ARXIV.2209.00099},
url = {https://arxiv.org/abs/2209.00099},
author = {Treviso, Marcos and Ji, Tianchu and Lee, Ji-Ung and van Aken, Betty and Cao, Qingqing and Ciosici, Manuel R. and Hassid, Michael and Heafield, Kenneth and Hooker, Sara and Martins, Pedro H. and Martins, André F. T. and Milder, Peter and Raffel, Colin and Simpson, Edwin and Slonim, Noam and Balasubramanian, Niranjan and Derczynski, Leon and Schwartz, Roy},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Methods for Natural Language Processing: A Survey},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Ruth-Ann/jampatoisnli | author: Ruth-Ann | last_modified: 2022-12-31T03:25:34Z | downloads: 13 | likes: 0 | tags: [task_categories:text-classification, task_ids:natural-language-inference, annotations_creators:expert-generated, language_creators:expert-generated, language_creators:found, multilinguality:monolingual, multilinguality:other-english-based-creole, size_categories:n<1K, source_datasets:original, …] | created: 2022-12-29T05:22:50Z | card:
---
annotations_creators:
- expert-generated
language:
- jam
language_creators:
- expert-generated
- found
license:
- other
multilinguality:
- monolingual
- other-english-based-creole
pretty_name: JamPatoisNLI
size_categories:
- n<1K
source_datasets:
- original
tags:
- creole
- low-resource-language
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# Dataset Card for JamPatoisNLI
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [jampatoisnli.github.io](https://jampatoisnli.github.io)
- **Repository:** https://github.com/ruth-ann/jampatoisnli
- **Paper:** https://arxiv.org/abs/2212.03419
- **Point of Contact:** Ruth-Ann Armstrong: armstrongruthanna@gmail.com
### Dataset Summary
JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois.
Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from
a major world language and a distinctive grammar reflecting the languages of the original speakers and the process
of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer
from large monolingual or multilingual pretrained models.
### Supported Tasks and Leaderboards
Natural language inference
### Languages
Jamaican Patois
### Data Fields
premise, hypothesis, label
### Data Splits
Train: 250
Val: 200
Test: 200
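A minimal loading sketch; the split name `train` is an assumption, and the field names follow the Data Fields section above:
```python
from datasets import load_dataset

# Split names ("train"/"validation"/"test") are an assumption based on the card.
jam = load_dataset("Ruth-Ann/jampatoisnli", split="train")
example = jam[0]
print(example["premise"], example["hypothesis"], example["label"])
```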
### Dataset Creation and Annotations
Premise collection:
97% of examples were taken from Twitter; the remainder were pulled from literature and an online cultural website.
Hypothesis construction:
For each premise, a hypothesis was written by a native speaker (our first author) so that the pair's classification would be E, N or C.
Label validation:
A random sample of 100 sentence pairs was double annotated by fluent speakers.
### Social Impact of Dataset
JamPatoisNLI is a low-resource language dataset in an English-based Creole spoken in the Caribbean,
Jamaican Patois. The creation of the dataset contributes to expanding the scope of NLP research
to under-explored languages across the world.
### Dataset Curators
[@ruth-ann](https://github.com/ruth-ann)
### Citation Information
```bib
@misc{https://doi.org/10.48550/arxiv.2212.03419,
  doi = {10.48550/ARXIV.2212.03419},
  url = {https://arxiv.org/abs/2212.03419},
  author = {Armstrong, Ruth-Ann and Hewitt, John and Manning, Christopher},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7},
  title = {JamPatoisNLI: A Jamaican Patois Natural Language Inference Dataset},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to Prof. Christopher Manning and John Hewitt for their contributions, guidance, facilitation and support related to the creation of this dataset.
georeactor/reddit_one_ups_2014 | author: georeactor | last_modified: 2023-03-28T22:02:40Z | downloads: 13 | likes: 0 | tags: [task_categories:text-classification, language:en, reddit, not-for-all-eyes, not-for-all-audiences, region:us] | created: 2022-12-29T08:23:42Z | card:
---
task_categories:
- text-classification
tags:
- reddit
- not-for-all-eyes
- not-for-all-audiences
language: en
---
# Dataset Card for reddit_one_ups_2014
## Dataset Description
- **Homepage:** https://github.com/Georeactor/reddit-one-ups
### Dataset Summary
Reddit 'one-ups' or 'clapbacks' - replies which scored higher than the original comments. This task makes one-ups easier by focusing on a set of common, often meme-like replies (e.g. 'yes', 'nope', '(͡°͜ʖ͡°)').
For commentary on predictions with a previous version of the dataset, see https://blog.goodaudience.com/can-deepclapback-learn-when-to-lol-e4a2092a8f2c
For unique / non-meme seq2seq version of this dataset, see https://huggingface.co/datasets/georeactor/reddit_one_ups_seq2seq_2014
Replies were selected from PushShift's archive of posts from 2014.
### Supported Tasks
Text classification task: finding the common reply (out of ~37) to match the parent comment text.
Text prediction task: estimating the vote score, or parent:reply ratio, of a meme response, as a measure of relevancy/cleverness of reply.
### Languages
Primarily English - includes some emoticons such as ┬─┬ノ(ಠ_ಠノ)
## Dataset Structure
### Data Instances
29,375 rows
### Data Fields
- id: the Reddit alphanumeric ID for the reply
- body: the content of the original reply
- score: the net vote score of the original reply
- parent_id: the Reddit alphanumeric ID for the parent
- author: the Reddit username of the reply
- subreddit: the Reddit community where the discussion occurred
- parent_score: the net vote score of the parent comment
- cleantext: the simplified reply (one of 37 classes)
- tstamp: the timestamp of the reply
- parent_body: the content of the original parent
## Dataset Creation
### Source Data
Reddit comments collected through PushShift.io archives for 2014.
#### Initial Data Collection and Normalization
- Removed deleted or empty comments.
- Selected only replies which scored 1.5x higher than a parent comment, where both have a positive score.
- Found the top/repeating phrases common to these one-ups/clapback comments.
- Selected only replies which had one of these top/repeating phrases.
- Made rows in PostgreSQL and output as CSV.
## Considerations for Using the Data
Comments and responses in the Reddit archives and output datasets all include NSFW and otherwise toxic language and links!
- You can use the subreddit and score columns to filter content (a filtering sketch follows this list).
- Imbalanced dataset: replies 'yes' and 'no' are more common than others.
- Overlap of labels: replies such as 'yes', 'yep', and 'yup' serve similar purposes; in other cases 'no' vs. 'nope' may be interesting.
- Timestamps: the given timestamp may help identify trends in meme replies
- Usernames: a username was included to identify the 'username checks out' meme, but this was not common enough in 2014, and the included username is from the reply.
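A minimal sketch of the filtering suggested in the list above; column names follow the Data Fields section, the threshold and subreddit are arbitrary examples, and the split name `train` is an assumption:
```python
from datasets import load_dataset

# Split name "train" is an assumption; column names come from the Data Fields section.
ds = load_dataset("georeactor/reddit_one_ups_2014", split="train")
# Keep high-scoring replies outside one example subreddit.
filtered = ds.filter(lambda ex: ex["score"] >= 10 and ex["subreddit"] != "AskReddit")
print(len(filtered))
```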
Reddit comments remain the property of Reddit and of the comment authors, under Reddit's Terms of Service.
lintang/numerical_reasoning_arithmetic | author: lintang | last_modified: 2023-01-09T06:33:43Z | downloads: 13 | likes: 0 | tags: [region:us] | created: 2023-01-05T08:48:37Z | card:
# Numerical Reasoning
dhurley/medicare | author: dhurley | last_modified: 2023-01-07T21:26:23Z | downloads: 13 | likes: 0 | tags: [license:mit, region:us] | created: 2023-01-07T19:13:51Z | card:
---
license: mit
---
jrtec/Superheroes | author: jrtec | last_modified: 2023-01-08T06:18:48Z | downloads: 13 | likes: 0 | tags: [task_categories:summarization, size_categories:1K<n<10K, language:en, license:cc0-1.0, superheroes, heroes, anime, manga, marvel, region:us] | created: 2023-01-08T01:38:39Z | card:
---
license: cc0-1.0
task_categories:
- summarization
language:
- en
tags:
- superheroes
- heroes
- anime
- manga
- marvel
size_categories:
- 1K<n<10K
---
# Dataset Card for Superheroes
## Dataset Description
1,400+ superhero histories and power descriptions for text mining and NLP. [Original source](https://www.kaggle.com/datasets/jonathanbesomi/superheroes-nlp-dataset/code?resource=download)
## Context
The aim of this dataset is to make text analytics and NLP even funnier. All of us have dreamed of being a superhero and saving the world, yet we are still on Kaggle figuring out how Python works. Then, why not improve our NLP competences by analyzing superheroes' histories and powers?
The particularity of this dataset is that it contains categorical and numerical features such as overall_score, intelligence_score, creator, alignment, gender, eye_color but also text features history_text and powers_text. By combining the two, a lot of interesting insights can be gathered!
## Content
We collected all data from superherodb and cooked it into a nice and clean tabular format for you.
The dataset contains 1447 different Superheroes. Each superhero row has:
* overall_score - derived by superherodb from the power stats features. Can you find the relationship? (see the sketch after this list)
* history_text - History of the Superhero (text features)
* powers_text - Description of Superheros' powers (text features)
* intelligence_score, strength_score, speed_score, durability_score, power_score and combat_score. (power stats features)
* "Origin" (full_name, alter_egos, …)
* "Connections" (occupation, base, teams, …)
* "Appareance" (gender, type_race, height, weight, eye_color, …)
## Acknowledgements
The following [Github repository](https://github.com/jbesomi/texthero/tree/master/dataset/Superheroes%20NLP%20Dataset) contains the code used to scrape this Dataset.
mehul7/captioned_military_aircraft | author: mehul7 | last_modified: 2023-01-11T23:35:22Z | downloads: 13 | likes: 3 | tags: [license:mit, region:us] | created: 2023-01-11T22:54:08Z | card:
---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5806592710.697
num_examples: 8341
download_size: 6709513141
dataset_size: 5806592710.697
---
ihanif/praang-images | author: ihanif | last_modified: 2023-01-17T11:27:22Z | downloads: 13 | likes: 0 | tags: [region:us] | created: 2023-01-17T11:27:10Z | card:
---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 7404618.0
num_examples: 23
download_size: 5551951
dataset_size: 7404618.0
---
# Dataset Card for "praang-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuvalkirstain/dreambooth_test_with_reg | author: yuvalkirstain | last_modified: 2023-01-18T08:09:31Z | downloads: 13 | likes: 0 | tags: [region:us] | created: 2023-01-18T06:59:08Z | card:
---
dataset_info:
features:
- name: text
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 183792899.0
num_examples: 200
- name: validation
num_bytes: 37346753.0
num_examples: 32
download_size: 78739258
dataset_size: 221139652.0
---
# Dataset Card for "dreambooth_test_with_reg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
includeno/movielens-100k | author: includeno | last_modified: 2023-01-19T16:13:51Z | downloads: 13 | likes: 0 | tags: [size_categories:10K<n<100K, license:apache-2.0, region:us] | created: 2023-01-19T15:56:56Z | card:
---
license: apache-2.0
size_categories:
- 10K<n<100K
---
kubota/defamation-japanese-twitter | author: kubota | last_modified: 2023-02-06T18:26:10Z | downloads: 13 | likes: 2 | tags: [task_categories:text-classification, annotations_creators:crowdsourced, language_creators:crowdsourced, multilinguality:monolingual, size_categories:1K<n<10K, source_datasets:original, language:ja, license:cc-by-4.0, region:us] | created: 2023-01-20T06:50:46Z | card:
---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: defamation_japanese_twitter
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: id
dtype: string
- name: target
sequence: string
- name: label
sequence: string
- name: user_id_list
sequence: int32
---
# defamation_japanese_twitter
# Japanese Twitter Defamation Detection Dataset
<!-- ## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** -->
## Dataset Summary
This is a dataset for detecting defamation on social media.
It annotates 5,000 Japanese tweets with the target and the type of defamation, as defined below. Each tweet was annotated by three crowdworkers. The tweets were posted between February 15, 2022 and June 30, 2022.
The original tweet texts are not included, so please collect them yourself using the Twitter API.
Two items are annotated: the defamation target (target) and the defamation content (label).
- target: classification of whom the text is about
- label: classification of the type of defamation directed at the target selected above
Texts that do not form a coherent sentence and whose meaning cannot be determined are labeled C (0).
| target | Target of the text | Examples |
| ---- | ---- | ---- |
| A1 (1) | A group (sharing race, gender, occupation, ideology, etc.) | |
| A2 (2) | An individual (a celebrity, an acquaintance, etc.) | President XX, the celebrity XX, "you" |
| A3 (3) | Target unclear | |
| C (0) | Not a coherent sentence; meaning cannot be determined | |
| label | Type of defamation | What is infringed | Examples |
| ---- | ---- | ---- | ---- |
| B1 (1) | Threatens life, or inflicts mental or physical harm | Peace of private life | • Threatening remarks such as murder threats<br>• "Things would be better if XX just disappeared" |
| B2 (2) | Disparages appearance, character, etc. | Sense of honor | • "Fat, yet somehow convinced they look cool"<br>• "Raised in the countryside, so no sense of fashion" |
| B3 (3) | Lowers the value a person objectively receives from society | Right to honor | • "XX caused an incident in the past and was arrested"<br>• "XX is having an affair with a coworker" |
| B4 (4) | None of B1-B3 applies; no defamatory content | | |
| C (0) | Not a coherent sentence; meaning cannot be determined | | |
## Data Fields
- `id`: Twitter ID
- `target`: the three annotators' answers for category A; values: C(0), A1(1), A2(2), A3(3)
- `label`: the three annotators' answers for category B; values: C(0), B1(1), B2(2), B3(3), B4(4)
- `user_id_list`: anonymized annotator IDs
## Example Using Twitter API
[](https://colab.research.google.com/github/kubotaissei/defamation_japanese_twitter/blob/master/notebooks/get_dataset_example.ipynb)
```python
# sample code from https://github.com/twitterdev/Twitter-API-v2-sample-code/blob/main/Tweet-Lookup/get_tweets_with_bearer_token.py
import requests
import os
import json
from datasets import load_dataset
# To set your environment variables in your terminal run the following line:
# export 'BEARER_TOKEN'='<your_bearer_token>'
bearer_token = os.environ.get("BEARER_TOKEN")
def create_url(ids: list):
tweet_fields = "tweet.fields=created_at"
ids = f"ids={','.join(ids)}"
url = "https://api.twitter.com/2/tweets?{}&{}".format(ids, tweet_fields)
return url
def bearer_oauth(r):
"""
Method required by bearer token authentication.
"""
r.headers["Authorization"] = f"Bearer {bearer_token}"
r.headers["User-Agent"] = "v2TweetLookupPython"
return r
def connect_to_endpoint(url):
response = requests.request("GET", url, auth=bearer_oauth)
if response.status_code != 200:
raise Exception(
"Request returned an error: {} {}".format(
response.status_code, response.text
)
)
return response.json()
def get_text_data(examples):
url = create_url(examples["id"])
json_response = connect_to_endpoint(url)
# print(json_response["data"])
text_dict = {data["id"]: data["text"] for data in json_response["data"]}
time_dict = {data["id"]: data["created_at"] for data in json_response["data"]}
return {
"text": [text_dict.get(id) for id in examples["id"]],
"created_at": [time_dict.get(id) for id in examples["id"]],
}
dataset = load_dataset("kubota/defamation-japanese-twitter")
dataset = dataset.map(get_text_data, batched=True, batch_size=100)
dataset["train"].to_pandas().head()
```
<!-- ## Data Splits
[More Information Needed]
## Dataset Creation
## Curation Rationale
[More Information Needed]
## Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed] -->
## Contributions
Thanks to [@kubotaissei](https://github.com/kubotaissei) for adding this dataset. | [
-0.3304722309112549,
-0.8386595845222473,
0.18733887374401093,
0.34966182708740234,
-0.24546784162521362,
0.29268547892570496,
-0.25053712725639343,
-0.5677520036697388,
0.5684118866920471,
0.29351550340652466,
-0.6803230047225952,
-0.6748959422111511,
-0.6415307521820068,
0.23543933033943... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NLPFin/Quantitative101 | NLPFin | 2023-01-23T04:17:06Z | 13 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-01-23T04:17:06Z | 2023-01-23T04:14:40.000Z | 2023-01-23T04:14:40 | ---
license: cc-by-nc-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/monotonicity-entailment | metaeval | 2023-01-24T08:35:27Z | 13 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-01-24T08:35:27Z | 2023-01-24T08:29:45.000Z | 2023-01-24T08:29:45 | ---
license: apache-2.0
---
```
@inproceedings{yanaka-etal-2019-neural,
title = "Can Neural Networks Understand Monotonicity Reasoning?",
author = "Yanaka, Hitomi and
Mineshima, Koji and
Bekki, Daisuke and
Inui, Kentaro and
Sekine, Satoshi and
Abzianidze, Lasha and
Bos, Johan",
booktitle = "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
year = "2019",
pages = "31--40",
}
```
| [
-0.4392159879207611,
-0.7409358024597168,
0.16726361215114594,
0.269280344247818,
-0.4551110863685608,
-0.4437052011489868,
-0.327797532081604,
-0.6299362182617188,
0.3807392716407776,
0.19571679830551147,
-0.8815308809280396,
-0.3084990382194519,
-0.554236888885498,
0.17923468351364136,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/naturallogic | metaeval | 2023-01-26T09:51:03Z | 13 | 0 | null | [
"task_categories:text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-01-26T09:51:03Z | 2023-01-26T09:49:49.000Z | 2023-01-26T09:49:49 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
---
https://github.com/feng-yufei/Neural-Natural-Logic
```bib
@inproceedings{feng2020exploring,
title={Exploring End-to-End Differentiable Natural Logic Modeling},
  author={Feng, Yufei and Zheng, Ziou and Liu, Quan and Greenspan, Michael and Zhu, Xiaodan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={1172--1185},
year={2020}
}
``` | [
-0.23713205754756927,
-0.5869814157485962,
0.23081210255622864,
0.2674928605556488,
-0.09128088504076004,
0.025560488924384117,
-0.4260258972644806,
-0.8444996476173401,
0.35367873311042786,
0.18546147644519806,
-0.7162511944770813,
-0.21164532005786896,
-0.3039100468158722,
0.391791224479... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clip-benchmark/wds_flickr30k | clip-benchmark | 2023-01-31T00:27:15Z | 13 | 0 | null | [
"region:us"
] | 2023-01-31T00:27:15Z | 2023-01-31T00:26:29.000Z | 2023-01-31T00:26:29 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
clip-benchmark/wds_mscoco_captions | clip-benchmark | 2023-01-31T00:31:29Z | 13 | 1 | null | [
"region:us"
] | 2023-01-31T00:31:29Z | 2023-01-31T00:29:00.000Z | 2023-01-31T00:29:00 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hanamizuki-ai/genshin-voice-v3.4-mandarin | hanamizuki-ai | 2023-04-13T02:28:53Z | 13 | 4 | null | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"region:us"
] | 2023-04-13T02:28:53Z | 2023-02-09T01:50:09.000Z | 2023-02-09T01:50:09 | ---
language:
- zh
multilinguality:
- monolingual
pretty_name: Genshin Voice
source_datasets:
- original
task_categories:
- text-to-speech
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: language
dtype: string
- name: npcName
dtype: string
- name: text
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 20516788863.251
num_examples: 78337
download_size: 34041643248
dataset_size: 20516788863.251
---
# Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
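Given the ~34 GB download size declared above, streaming may be preferable for a first look. A minimal sketch (field names taken from the schema above):
```python
from datasets import load_dataset

# Stream to avoid the full ~34 GB download
ds = load_dataset("hanamizuki-ai/genshin-voice-v3.4-mandarin", split="train", streaming=True)
sample = next(iter(ds))
print(sample["npcName"], sample["text"])
```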
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game.
#### Who are the source language producers?
The language producers are employees of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/).
### Annotations
The dataset contains official annotations from the game, including in-game speaker names and transcripts.
## Additional Information
### Dataset Curators
The dataset was created by [w4123](https://github.com/w4123) initially in his [GitHub repository](https://github.com/w4123/GenshinVoice).
### Licensing Information
Copyright © COGNOSPHERE. All Rights Reserved. | [
-0.11419045180082321,
-0.181595116853714,
-0.09415209293365479,
0.2813863456249237,
-0.1745700240135193,
0.4151202440261841,
-0.31707683205604553,
-0.30916252732276917,
0.3252650797367096,
0.7964580059051514,
-1.1213959455490112,
-0.8945391178131104,
-0.1038065180182457,
0.0251917950809001... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vishnun/NLP-KnowledgeGraph | vishnun | 2023-02-15T04:24:58Z | 13 | 0 | null | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc0-1.0",
"ML",
"NLP",
"region:us"
] | 2023-02-15T04:24:58Z | 2023-02-14T06:45:57.000Z | 2023-02-14T06:45:57 | ---
license: cc0-1.0
task_categories:
- token-classification
language:
- en
tags:
- ML
- NLP
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A knowledge-graph (KG) dataset created using the spaCy part-of-speech tagger and dependency parser.
### Supported Tasks and Leaderboards
Can be used for token classification to detect knowledge-graph entities and relations.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Important fields for the token classification task are:
* tokens - tokenized text
* tags - Tags for each token
{'SRC' - Source, 'REL' - Relation, 'TGT' - Target, 'O' - Others}
### Data Splits
A single data file with around 15k records
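A minimal loading sketch (it assumes the file is exposed as a single `train` split):
```python
from datasets import load_dataset

ds = load_dataset("vishnun/NLP-KnowledgeGraph", split="train")
ex = ds[0]
# tokens and tags are parallel sequences; e.g. the tag 'SRC' marks a source entity
for token, tag in zip(ex["tokens"], ex["tags"]):
    print(token, tag)
```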
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.4550614356994629,
-0.3958912491798401,
0.07708415389060974,
0.14574575424194336,
-0.10927225649356842,
0.07515666633844376,
-0.2190057784318924,
-0.18132059276103973,
0.32168474793434143,
0.7944675087928772,
-0.6979532837867737,
-1.2482703924179077,
-0.9320905208587646,
0.06895355880260... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svjack/context-dialogue-generate-ds-zh-v1 | svjack | 2023-02-21T07:59:42Z | 13 | 0 | null | [
"region:us"
] | 2023-02-21T07:59:42Z | 2023-02-21T07:28:37.000Z | 2023-02-21T07:28:37 | ---
dataset_info:
features:
- name: sent
dtype: string
- name: dialogue
sequence: string
- name: L_emb
sequence: float32
splits:
- name: train
num_bytes: 74417088
num_examples: 20000
download_size: 82191201
dataset_size: 74417088
---
# Dataset Card for "context-dialogue-generate-ds-zh-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7056859731674194,
-0.5005975961685181,
0.5303649306297302,
0.04384121298789978,
-0.385907918214798,
-0.3723810911178589,
0.22594380378723145,
-0.0028996842447668314,
0.8782333135604858,
0.6081138849258423,
-1.5151132345199585,
-0.7936797142028809,
-0.39068594574928284,
-0.15054987370967... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zeusfsx/ukrainian-news | zeusfsx | 2023-05-14T08:04:18Z | 13 | 9 | null | [
"task_categories:text-generation",
"size_categories:10M<n<100M",
"language:uk",
"license:unknown",
"news",
"region:us"
] | 2023-05-14T08:04:18Z | 2023-03-01T18:34:15.000Z | 2023-03-01T18:34:15 | ---
license: unknown
task_categories:
- text-generation
language:
- uk
pretty_name: ukr-news
size_categories:
- 10M<n<100M
tags:
- news
---
# Ukrainian News Dataset
This is a dataset of news articles downloaded from various Ukrainian websites and Telegram channels.
The dataset contains 22,567,099 JSON objects (news items), ~67 GB in total, each with the following fields:
```
title: The title of the news article
text: The text of the news article, which may contain HTML tags(e.g., paragraphs, links, images, etc.)
url: The URL of the news article
datetime: The time of publication or when the article was parsed and added to the dataset
owner: The name of the website that published the news article
```
Count of news items from websites: 16,022,416
Count of Telegram posts: 6,544,683
The JSON objects are divided into parts, and the dataset is available for download via Hugging Face. The terms of use state that all data in this dataset is under the copyright of the owners of the respective websites.
## Accessing the Dataset
The dataset is available for download via the Hugging Face datasets library. You can install the library via pip:
```bash
pip install datasets
```
Once you have installed the library, you can load the dataset using the following code:
```python
from datasets import load_dataset
dataset = load_dataset('zeusfsx/ukrainian-news')
```
This will download and load the entire dataset. If you prefer to load only a subset of the data, you can specify the split argument:
```python
# Load only the first 10,000 examples from the "train" split
dataset = load_dataset('zeusfsx/ukrainian-news', split='train[:10000]')
```
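Given the dataset's size, streaming is also worth considering. A minimal sketch (field names as in the schema above):
```python
from datasets import load_dataset

# Stream the dataset instead of downloading all ~67 GB up front
dataset = load_dataset('zeusfsx/ukrainian-news', split='train', streaming=True)
for article in dataset.take(5):
    print(article['title'], article['url'])
```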
## Contacts
If you have any questions or comments about this dataset, please contact me by email at [zeusfsxtmp@gmail.com]. I will do my best to respond to your inquiry as soon as possible.
## License
The dataset is made available under the terms of use specified by the owners of the respective websites. Please consult the individual websites for more information on their terms of use. | [
-0.2879590094089508,
-0.519403874874115,
0.30506962537765503,
0.43523162603378296,
-0.623187780380249,
0.04071327671408653,
-0.18829689919948578,
-0.21859456598758698,
0.3224579989910126,
0.4817552864551544,
-0.7628611922264099,
-0.7533969283103943,
-0.46584948897361755,
0.1702510863542556... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Elfsong/ClinicalDataset | Elfsong | 2023-03-05T06:43:13Z | 13 | 12 | null | [
"task_categories:summarization",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-03-05T06:43:13Z | 2023-03-05T06:15:46.000Z | 2023-03-05T06:15:46 | ---
task_categories:
- summarization
- conversational
language:
- en
pretty_name: MediQA
size_categories:
- 1K<n<10K
---
# MEDIQA-Chat 2023 Training/Validation Data
# Task A
The training set consists of 1,201 pairs of conversations and associated section headers and contents.
The validation set consists of 100 pairs of conversations and their summaries.
The full list of normalized section headers:
1. fam/sochx [FAMILY HISTORY/SOCIAL HISTORY]
2. genhx [HISTORY of PRESENT ILLNESS]
3. pastmedicalhx [PAST MEDICAL HISTORY]
4. cc [CHIEF COMPLAINT]
5. pastsurgical [PAST SURGICAL HISTORY]
6. allergy
7. ros [REVIEW OF SYSTEMS]
8. medications
9. assessment
10. exam
11. diagnosis
12. disposition
13. plan
14. edcourse [EMERGENCY DEPARTMENT COURSE]
15. immunizations
16. imaging
17. gynhx [GYNECOLOGIC HISTORY]
18. procedures
19. other_history
20. labs
# Task B
The training set consists of 67 pairs of conversations and full notes. The validation set includes 20 pairs of conversations and clinical notes.
Full encounter notes are expected to have at least one of four overall section divisions, each demarcated by the first-occurring of its related section headers:
> | note_division | section_headers |
> | ---- | ---- |
> | subjective | chief complaint, history of present illness, hpi, subjective |
> | objective_exam | physical exam, exam |
> | objective_results | results, findings |
> | assessment_and_plan | assessment, plan |
Depending on the encounter, objective_exam and objective_results may not be relevant.
We encourage reviewing the sample data as well as the evaluation script to understand the best demarcation headers for your generated note.
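The sketch below is illustrative only (the official evaluation script defines the actual demarcation logic); it splits a note into divisions at the first-occurring header of each division:
```python
import re

# Header names taken from the table above
DIVISIONS = {
    "subjective": ["chief complaint", "history of present illness", "hpi", "subjective"],
    "objective_exam": ["physical exam", "exam"],
    "objective_results": ["results", "findings"],
    "assessment_and_plan": ["assessment", "plan"],
}

def find_divisions(note: str) -> dict:
    # Earliest line-start occurrence of any header of each division
    starts = {}
    for division, headers in DIVISIONS.items():
        positions = [m.start()
                     for h in headers
                     for m in re.finditer(rf"(?im)^\s*{re.escape(h)}\b", note)]
        if positions:
            starts[division] = min(positions)
    # Slice the note between consecutive division starts
    ordered = sorted(starts.items(), key=lambda kv: kv[1])
    spans = {}
    for i, (division, start) in enumerate(ordered):
        end = ordered[i + 1][1] if i + 1 < len(ordered) else len(note)
        spans[division] = note[start:end].strip()
    return spans
```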
# Task C
The training set consists of 67 pairs of full doctor-patient conversations and notes and the validation set includes 20 pairs of full conversations and clinical notes (same as Task-B datasets). The Task-A training and validation sets (1,301 pairs) could be used as additional training data.
| [
-0.19642801582813263,
-0.45247066020965576,
0.60069340467453,
0.15974600613117218,
-0.21548743546009064,
-0.05145229399204254,
0.17005965113639832,
-0.27923527359962463,
0.33237457275390625,
0.6754003763198853,
-0.7539812326431274,
-0.7458165884017944,
-0.5420051217079163,
0.18445809185504... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RGBD-SOD/test | RGBD-SOD | 2023-03-12T05:45:57Z | 13 | 0 | null | [
"size_categories:10K<n<100K",
"RGBD-SOD",
"region:us"
] | 2023-03-12T05:45:57Z | 2023-03-10T15:48:51.000Z | 2023-03-10T15:48:51 | ---
dataset_info:
- config_name: v1
features:
- name: depth
dtype: image
- name: rgb
dtype: image
- name: gt
dtype: image
- name: name
dtype: string
splits:
- name: train
num_bytes: 4232411
num_examples: 10
- name: validation
num_bytes: 4232411
num_examples: 10
download_size: 2917880
dataset_size: 8464822
- config_name: v2
features:
- name: depth
dtype: image
- name: rgb
dtype: image
- name: gt
dtype: image
- name: name
dtype: string
splits:
- name: train
num_bytes: 4232411
num_examples: 10
- name: validation
num_bytes: 4232411
num_examples: 10
download_size: 2917880
dataset_size: 8464822
tags:
- RGBD-SOD
size_categories:
- 10K<n<100K
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davanstrien/wikiart-resized-sample | davanstrien | 2023-03-21T20:09:00Z | 13 | 0 | null | [
"region:us"
] | 2023-03-21T20:09:00Z | 2023-03-21T14:04:35.000Z | 2023-03-21T14:04:35 | ---
dataset_info:
features:
- name: image
dtype: image
- name: artist
dtype:
class_label:
names:
'0': Unknown Artist
'1': boris-kustodiev
'2': camille-pissarro
'3': childe-hassam
'4': claude-monet
'5': edgar-degas
'6': eugene-boudin
'7': gustave-dore
'8': ilya-repin
'9': ivan-aivazovsky
'10': ivan-shishkin
'11': john-singer-sargent
'12': marc-chagall
'13': martiros-saryan
'14': nicholas-roerich
'15': pablo-picasso
'16': paul-cezanne
'17': pierre-auguste-renoir
'18': pyotr-konchalovsky
'19': raphael-kirchner
'20': rembrandt
'21': salvador-dali
'22': vincent-van-gogh
'23': hieronymus-bosch
'24': leonardo-da-vinci
'25': albrecht-durer
'26': edouard-cortes
'27': sam-francis
'28': juan-gris
'29': lucas-cranach-the-elder
'30': paul-gauguin
'31': konstantin-makovsky
'32': egon-schiele
'33': thomas-eakins
'34': gustave-moreau
'35': francisco-goya
'36': edvard-munch
'37': henri-matisse
'38': fra-angelico
'39': maxime-maufra
'40': jan-matejko
'41': mstislav-dobuzhinsky
'42': alfred-sisley
'43': mary-cassatt
'44': gustave-loiseau
'45': fernando-botero
'46': zinaida-serebriakova
'47': georges-seurat
'48': isaac-levitan
'49': joaquãn-sorolla
'50': jacek-malczewski
'51': berthe-morisot
'52': andy-warhol
'53': arkhip-kuindzhi
'54': niko-pirosmani
'55': james-tissot
'56': vasily-polenov
'57': valentin-serov
'58': pietro-perugino
'59': pierre-bonnard
'60': ferdinand-hodler
'61': bartolome-esteban-murillo
'62': giovanni-boldini
'63': henri-martin
'64': gustav-klimt
'65': vasily-perov
'66': odilon-redon
'67': tintoretto
'68': gene-davis
'69': raphael
'70': john-henry-twachtman
'71': henri-de-toulouse-lautrec
'72': antoine-blanchard
'73': david-burliuk
'74': camille-corot
'75': konstantin-korovin
'76': ivan-bilibin
'77': titian
'78': maurice-prendergast
'79': edouard-manet
'80': peter-paul-rubens
'81': aubrey-beardsley
'82': paolo-veronese
'83': joshua-reynolds
'84': kuzma-petrov-vodkin
'85': gustave-caillebotte
'86': lucian-freud
'87': michelangelo
'88': dante-gabriel-rossetti
'89': felix-vallotton
'90': nikolay-bogdanov-belsky
'91': georges-braque
'92': vasily-surikov
'93': fernand-leger
'94': konstantin-somov
'95': katsushika-hokusai
'96': sir-lawrence-alma-tadema
'97': vasily-vereshchagin
'98': ernst-ludwig-kirchner
'99': mikhail-vrubel
'100': orest-kiprensky
'101': william-merritt-chase
'102': aleksey-savrasov
'103': hans-memling
'104': amedeo-modigliani
'105': ivan-kramskoy
'106': utagawa-kuniyoshi
'107': gustave-courbet
'108': william-turner
'109': theo-van-rysselberghe
'110': joseph-wright
'111': edward-burne-jones
'112': koloman-moser
'113': viktor-vasnetsov
'114': anthony-van-dyck
'115': raoul-dufy
'116': frans-hals
'117': hans-holbein-the-younger
'118': ilya-mashkov
'119': henri-fantin-latour
'120': m.c.-escher
'121': el-greco
'122': mikalojus-ciurlionis
'123': james-mcneill-whistler
'124': karl-bryullov
'125': jacob-jordaens
'126': thomas-gainsborough
'127': eugene-delacroix
'128': canaletto
- name: genre
dtype:
class_label:
names:
'0': abstract_painting
'1': cityscape
'2': genre_painting
'3': illustration
'4': landscape
'5': nude_painting
'6': portrait
'7': religious_painting
'8': sketch_and_study
'9': still_life
'10': Unknown Genre
- name: style
dtype:
class_label:
names:
'0': Abstract_Expressionism
'1': Action_painting
'2': Analytical_Cubism
'3': Art_Nouveau
'4': Baroque
'5': Color_Field_Painting
'6': Contemporary_Realism
'7': Cubism
'8': Early_Renaissance
'9': Expressionism
'10': Fauvism
'11': High_Renaissance
'12': Impressionism
'13': Mannerism_Late_Renaissance
'14': Minimalism
'15': Naive_Art_Primitivism
'16': New_Realism
'17': Northern_Renaissance
'18': Pointillism
'19': Pop_Art
'20': Post_Impressionism
'21': Realism
'22': Rococo
'23': Romanticism
'24': Symbolism
'25': Synthetic_Cubism
'26': Ukiyo_e
splits:
- name: train
num_bytes: 3110660852.85595
num_examples: 50000
download_size: 3114376026
dataset_size: 3110660852.85595
---
# Dataset Card for "wikiart-resized-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7586042881011963,
-0.13821350038051605,
0.08850302547216415,
-0.012535952031612396,
-0.3631992042064667,
-0.07387322932481766,
-0.093449667096138,
-0.13850320875644684,
1.106737732887268,
0.37403392791748047,
-0.9804625511169434,
-0.5522724986076355,
-0.5291339755058289,
-0.125565782189... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chymaks/Igbo_ner | chymaks | 2023-11-28T14:23:46Z | 13 | 0 | null | [
"license:cc-by-nc-2.0",
"region:us"
] | 2023-11-28T14:23:46Z | 2023-03-21T14:07:46.000Z | 2023-03-21T14:07:46 | ---
license: cc-by-nc-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
proofcheck/prooflang | proofcheck | 2023-06-01T13:35:20Z | 13 | 1 | null | [
"task_categories:text-generation",
"size_categories:1B<n<10B",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-06-01T13:35:20Z | 2023-03-24T23:23:54.000Z | 2023-03-24T23:23:54 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1B<n<10B
pretty_name: ProofLang Corpus
dataset_info:
- config_name: proofs
num_bytes: 3197091800
num_examples: 3681901
features:
- name: fileID
dtype: string
- name: proof
dtype: string
- config_name: sentences
num_bytes: 3736579062
num_examples: 38899130
features:
- name: fileID
dtype: string
- name: sentence
dtype: string
download_size: 6933683563
dataset_size: 6933670862
---
# Dataset Card for the ProofLang Corpus
## Dataset Summary
The ProofLang Corpus includes 3.7M proofs (558 million words) mechanically extracted from papers that were posted on [arXiv.org](https://arXiv.org) between 1992 and 2020.
The focus of this corpus is proofs, rather than the explanatory text that surrounds them, and more specifically on the *language* used in such proofs.
Specific mathematical content is filtered out, resulting in sentences such as `Let MATH be the restriction of MATH to MATH.`
This dataset reflects how people prefer to write (non-formalized) proofs, and is also amenable to statistical analyses and experiments with Natural Language Processing (NLP) techniques.
We hope it can serve as an aid in the development of language-based proof assistants and proof checkers for professional and educational purposes.
## Dataset Structure
There are multiple TSV versions of the data. Primarily, `proofs` divides up the data proof-by-proof, and `sentences` further divides up the same data sentence-by-sentence.
The `raw` dataset is a less-cleaned-up version of `proofs`. More usefully, the `tags` dataset gives arXiv subject tags for each paper ID found in the other data files.
* The data in `proofs` (and `raw`) consists of a `paper` ID (identifying where the proof was extracted from), and the `proof` as a string.
* The data in `sentences` consists of a `paper` ID, and the `sentence` as a string.
* The data in `tags` consists of a `paper` ID, and the arXiv subject tags for that paper as a single comma-separated string.
Further metadata about papers can be queried from arXiv.org using the paper ID.
In particular, each paper `<id>` in the dataset can be accessed online at the url `https://arxiv.org/abs/<id>`
## Dataset Size
* `proofs` is 3,094,779,182 bytes (unzipped) and has 3,681,893 examples.
* `sentences` is 3,545,309,822 bytes (unzipped) and has 38,899,132 examples.
* `tags` is 7,967,839 bytes (unzipped) and has 328,642 rows.
* `raw` is 3,178,997,379 bytes (unzipped) and has 3,681,903 examples.
## Dataset Statistics
* The average length of `sentences` is 14.1 words.
* The average length of `proofs` is 10.5 sentences.
## Dataset Usage
Data can be downloaded as (zipped) TSV files.
Accessing the data programmatically from Python is also possible using the `Datasets` library.
For example, to print the first 10 proofs:
```python
from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in dataset.take(10):
print(d['paper'], d['proof'])
```
To look at individual sentences from the proofs,
```python
dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
for d in dataset.take(10):
print(d['paper'], d['sentence'])
```
To get a comma-separated list of arXiv subject tags for each paper,
```python
from datasets import load_dataset
dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
for d in dataset.take(10):
print(d['paper'], d['tags'])
```
Finally, to look at a version of the proofs with less aggressive cleanup (straight from the LaTeX extraction),
```python
dataset = load_dataset('proofcheck/prooflang', 'raw', split='train', streaming=True)
for d in dataset.take(10):
print(d['paper'], d['proof'])
```
### Data Splits
There is currently no train/test split; all the data is in `train`.
## Dataset Creation
We started with the LaTeX source of 1.6M papers that were submitted to [arXiv.org](https://arXiv.org) between 1992 and April 2022.
The proofs were extracted using a Python script simulating parts of LaTeX (including defining and expanding macros).
It does no actual typesetting, throws away output not between `\begin{proof}...\end{proof}`, and skips math content. During extraction,
* Math-mode formulas (signalled by `$`, `\begin{equation}`, etc.) become `MATH`
* `\ref{...}` and variants (`autoref`, `\subref`, etc.) become `REF`
* `\cite{...}` and variants (`\Citet`, `\shortciteNP`, etc.) become `CITE`
* Words that appear to be proper names become `NAME`
* `\item` becomes `CASE:`
We then run a cleanup pass on the extracted proofs that includes
* Cleaning up common extraction errors (e.g., due to uninterpreted macros)
* Replacing more references by `REF`, e.g., `Theorem 2(a)` or `Postulate (*)`
* Replacing more citations with `CITE`, e.g., `Page 47 of CITE`
* Replacing more proof-case markers with `CASE:`, e.g., `Case (a).`
* Fixing a few common misspellings
## Additional Information
This dataset is released under the Creative Commons Attribution 4.0 licence.
Copyright for the actual proofs remains with the authors of the papers on [arXiv.org](https://arXiv.org), but these simplified snippets are fair use under US copyright law.
| [
-0.30562862753868103,
-0.3719845414161682,
0.4329254925251007,
0.029449736699461937,
-0.18645432591438293,
-0.1821955293416977,
-0.1862308830022812,
-0.2404249757528305,
-0.0803210511803627,
0.47670698165893555,
0.04631355032324791,
-0.4751601815223694,
-0.6358206868171692,
0.2649561762809... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hyperspace-Technologies/scp-wiki-text | Hyperspace-Technologies | 2023-04-01T02:44:11Z | 13 | 0 | null | [
"size_categories:100M<n<1B",
"language:en",
"license:cc-by-4.0",
"scp",
"region:us"
] | 2023-04-01T02:44:11Z | 2023-04-01T01:40:27.000Z | 2023-04-01T01:40:27 | ---
license: cc-by-4.0
language:
- en
tags:
- scp
size_categories:
- 100M<n<1B
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 24497718.02277939
num_examples: 314294
- name: test
num_bytes: 2722003.3115220205
num_examples: 34922
download_size: 72410093
dataset_size: 27219721.334301412
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nanakonoda/xnli_parallel | nanakonoda | 2023-04-18T13:23:10Z | 13 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|xnli",
"language:en",
"language:de",
"language:fr",
"mode classification",
"aligned",
"region:us"
] | 2023-04-18T13:23:10Z | 2023-04-03T00:49:12.000Z | 2023-04-03T00:49:12 | ---
annotations_creators:
- expert-generated
language:
- en
- de
- fr
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: XNLI Parallel Corpus
size_categories:
- 100K<n<1M
source_datasets:
- extended|xnli
tags:
- mode classification
- aligned
task_categories:
- text-classification
task_ids: []
dataset_info:
- config_name: en
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': spoken
'1': written
splits:
- name: train
num_bytes: 92288
num_examples: 830
- name: test
num_bytes: 186853
num_examples: 1669
- config_name: de
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': spoken
'1': written
splits:
- name: train
num_bytes: 105681
num_examples: 830
- name: test
num_bytes: 214008
num_examples: 1669
- config_name: fr
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': spoken
'1': written
splits:
- name: train
    num_bytes: 109164
    num_examples: 830
- name: test
num_bytes: 221286
num_examples: 1669
download_size: 1864
dataset_size: 1840
---
# Dataset Card for XNLI Parallel Corpus
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
## Dataset Structure
### Data Instances
```
{
    'text': "And he said , Mama , I 'm home .",
    'label': 0
}
```
### Data Fields
- text: sentence
- label: binary label of the text (0: spoken, 1: written)
### Data Splits
- train: 830
- test: 1669
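A minimal loading sketch (it assumes each language config loads by the name declared in the YAML above):
```python
from datasets import load_dataset

# Each language is a separate config: "en", "de", "fr"
ds = load_dataset("nanakonoda/xnli_parallel", "en")
print(ds["train"][0])  # e.g. {'text': "...", 'label': 1}
```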
### Other Statistics
#### Vocabulary Size
- English
- train: 4363
- test: 7128
- German
- train: 5070
- test: 8601
- French
- train: 4881
- test: 7935
#### Average Sentence Length
- English
- train: 20.689156626506023
- test: 20.75254643499101
- German
- train: 20.367469879518072
- test: 20.639904134212102
- French
- train: 23.455421686746988
- test: 23.731575793888556
#### Label Split
- train:
- 0: 166
- 1: 664
- test:
- 0: 334
- 1: 1335
#### Out-of-vocabulary words in model
- English
- BERT (bert-base-uncased)
- train: 800
- test: 1638
- mBERT (bert-base-multilingual-uncased)
- train: 1347
- test: 2693
- German BERT (bert-base-german-dbmdz-uncased)
- train: 3228
- test: 5581
- flauBERT (flaubert-base-uncased)
- train: 4363
- test: 7128
- German
- BERT (bert-base-uncased)
- train: 4285
- test: 7387
- mBERT (bert-base-multilingual-uncased)
- train: 3126
- test: 5863
- German BERT (bert-base-german-dbmdz-uncased)
- train: 2033
- test: 3938
- flauBERT (flaubert-base-uncased)
- train: 5069
- test: 8600
- French
- BERT (bert-base-uncased)
- train: 3784
- test: 6289
- mBERT (bert-base-multilingual-uncased)
- train: 2847
- test: 5084
- German BERT (bert-base-german-dbmdz-uncased)
- train: 4212
- test: 6964
- flauBERT (flaubert-base-uncased)
- train: 4881
- test: 7935
## Dataset Creation
### Curation Rationale
N/A
### Source Data
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A | [
-0.39720070362091064,
-0.5238009095191956,
0.13511888682842255,
0.26357001066207886,
-0.07434818148612976,
0.04233637452125549,
-0.578096866607666,
-0.42472878098487854,
0.7121387720108032,
0.18039566278457642,
-0.6627247333526611,
-0.8282694220542908,
-0.5715964436531067,
0.23329661786556... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pritam1984314/cool_job_dataset | pritam1984314 | 2023-04-05T21:02:20Z | 13 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:openrail",
"region:us"
] | 2023-04-05T21:02:20Z | 2023-04-05T20:40:04.000Z | 2023-04-05T20:40:04 | ---
license: openrail
task_categories:
- text-generation
language:
- en
pretty_name: headline
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
climatebert/climate_specificity | climatebert | 2023-04-18T16:02:48Z | 13 | 1 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-04-18T16:02:48Z | 2023-04-11T13:12:11.000Z | 2023-04-11T13:12:11 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: ClimateSpecificity
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-specific
'1': specific
splits:
- name: train
num_bytes: 492077
num_examples: 1000
- name: test
num_bytes: 174265
num_examples: 320
download_size: 373454
dataset_size: 666342
---
# Dataset Card for climate_specificity
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3998435)
- **Leaderboard:**
- **Point of Contact:** [Nicolas Webersinke](mailto:nicolas.webersinke@fau.de)
### Dataset Summary
We introduce an expert-annotated dataset for classifying the specificity of climate-related paragraphs in corporate disclosures.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given climate-related paragraph is specific or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'text': '− Scope 3: Optional scope that includes indirect emissions associated with the goods and services supply chain produced outside the organization. Included are emissions from the transport of products from our logistics centres to stores (downstream) performed by external logistics operators (air, land and sea transport) as well as the emissions associated with electricity consumption in franchise stores.',
'label': 1
}
```
### Data Fields
- text: a climate-related paragraph extracted from corporate annual reports and sustainability reports
- label: the label (0 -> non-specific, 1 -> specific)
### Data Splits
The dataset is split into:
- train: 1,000
- test: 320
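A minimal loading sketch:
```python
from datasets import load_dataset

dataset = load_dataset("climatebert/climate_specificity")
for example in dataset["train"].select(range(3)):
    print(example["label"], example["text"][:80])
```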
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains climate-related paragraphs extracted from financial disclosures by firms. We collect text from corporate annual reports and sustainability reports.
For more information regarding our sample selection, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to the Appendix of our paper (see [citation](#citation-information)).
#### Who are the annotators?
The authors and students at Universität Zürich and Friedrich-Alexander-Universität Erlangen-Nürnberg with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
- Nicolas Webersinke
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [markus.leippold@bf.uzh.ch](mailto:markus.leippold@bf.uzh.ch).
### Citation Information
```bibtex
@techreport{bingler2023cheaptalk,
title={How Cheap Talk in Climate Disclosures Relates to Climate Initiatives, Corporate Emissions, and Reputation Risk},
author={Bingler, Julia and Kraus, Mathias and Leippold, Markus and Webersinke, Nicolas},
type={Working paper},
institution={Available at SSRN 3998435},
year={2023}
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | [
-0.2836640179157257,
-0.35122644901275635,
0.18557989597320557,
0.16146446764469147,
-0.35901299118995667,
-0.12690435349941254,
-0.2922341823577881,
-0.5909507870674133,
0.3678703308105469,
0.4124862551689148,
-0.4723578095436096,
-0.7904286980628967,
-0.47379928827285767,
0.0374681055545... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pphuc25/VLSP_T1 | pphuc25 | 2023-04-17T13:06:54Z | 13 | 0 | null | [
"region:us"
] | 2023-04-17T13:06:54Z | 2023-04-16T16:10:42.000Z | 2023-04-16T16:10:42 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 870843590.0
num_examples: 7500
download_size: 862653100
dataset_size: 870843590.0
---
# Dataset Card for "VLSP_T1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.37456783652305603,
-0.04842660576105118,
0.19838808476924896,
0.3106991648674011,
-0.4546888470649719,
0.032168224453926086,
0.4434165954589844,
-0.17470934987068176,
0.9375604391098022,
0.578590989112854,
-0.8530001044273376,
-0.88210529088974,
-0.7370001077651978,
-0.4224155843257904,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vision-CAIR/cc_sbu_align | Vision-CAIR | 2023-04-19T22:21:39Z | 13 | 29 | null | [
"region:us"
] | 2023-04-19T22:21:39Z | 2023-04-19T21:45:46.000Z | 2023-04-19T21:45:46 | # MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models
[Deyao Zhu](https://tsutikgiau.github.io/)* (On Job Market!), [Jun Chen](https://junchen14.github.io/)* (On Job Market!), [Xiaoqian Shen](https://xiaoqian-shen.github.io), [Xiang Li](https://xiangli.ac.cn), and [Mohamed Elhoseiny](https://www.mohamed-elhoseiny.com/). *Equal Contribution
**King Abdullah University of Science and Technology**
## Online Demo
Click the image to chat with MiniGPT-4 around your images
[](https://minigpt-4.github.io)
## Examples
| | |
:-------------------------:|:-------------------------:
 | 
 | 
More examples can be found in the [project page](https://minigpt-4.github.io).
## Introduction
- MiniGPT-4 aligns a frozen visual encoder from BLIP-2 with a frozen LLM, Vicuna, using just one projection layer.
- We train MiniGPT-4 in two stages. The first traditional pretraining stage is trained using roughly 5 million aligned image-text pairs in 10 hours using 4 A100s. After the first stage, Vicuna is able to understand the image, but its generation ability is heavily impacted.
- To address this issue and improve usability, we propose a novel way to create high-quality image-text pairs by the model itself and ChatGPT together. Based on this, we then create a small (3500 pairs in total) yet high-quality dataset.
- The second finetuning stage is trained on this dataset in a conversation template to significantly improve its generation reliability and overall usability. To our surprise, this stage is computationally efficient and takes only around 7 minutes with a single A100.
- MiniGPT-4 yields many emerging vision-language capabilities similar to those demonstrated in GPT-4.

## Getting Started
### Installation
**1. Prepare the code and the environment**
Git clone our repository, create a python environment, and activate it via the following commands
```bash
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
cd MiniGPT-4
conda env create -f environment.yml
conda activate minigpt4
```
**2. Prepare the pretrained Vicuna weights**
The current version of MiniGPT-4 is built on the v0 version of Vicuna-13B.
Please refer to our instruction [here](PrepareVicuna.md)
to prepare the Vicuna weights.
The final weights would be in a single folder with the following structure:
```
vicuna_weights
├── config.json
├── generation_config.json
├── pytorch_model.bin.index.json
├── pytorch_model-00001-of-00003.bin
...
```
Then, set the path to the vicuna weight in the model config file
[here](minigpt4/configs/models/minigpt4.yaml#L16) at Line 16.
**3. Prepare the pretrained MiniGPT-4 checkpoint**
To play with our pretrained model, download the pretrained checkpoint
[here](https://drive.google.com/file/d/1a4zLvaiDBr-36pasffmgpvH5P7CKmpze/view?usp=share_link).
Then, set the path to the pretrained checkpoint in the evaluation config file
in [eval_configs/minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#L10) at Line 11.
### Launching Demo Locally
Try out our demo [demo.py](demo.py) on your local machine by running
```
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
```
Here, we load Vicuna in 8-bit by default to save GPU memory.
Besides, the default beam search width is 1.
Under this setting, the demo costs about 23 GB of GPU memory.
If you have a more powerful GPU with larger GPU memory, you can run the model
in 16-bit by setting low_resource to False in the config file
[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
### Training
The training of MiniGPT-4 contains two alignment stages.
**1. First pretraining stage**
In the first pretraining stage, the model is trained using image-text pairs from the Laion and CC datasets
to align the vision and language model. To download and prepare the datasets, please check
our [first stage dataset preparation instruction](dataset/README_1_STAGE.md).
After the first stage, the visual features are mapped and can be understood by the language
model.
To launch the first stage training, run the following command. In our experiments, we use 4 A100s.
You can change the save path in the config file
[train_configs/minigpt4_stage1_pretrain.yaml](train_configs/minigpt4_stage1_pretrain.yaml)
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage1_pretrain.yaml
```
A MiniGPT-4 checkpoint with only stage one training can be downloaded
[here](https://drive.google.com/file/d/1u9FRRBB3VovP1HxCAlpD9Lw4t4P6-Yq8/view?usp=share_link).
Compared to the model after stage two, this checkpoint frequently generates incomplete and repeated sentences.
**2. Second finetuning stage**
In the second stage, we use a small, high-quality image-text pair dataset we created ourselves
and convert it to a conversation format to further align MiniGPT-4.
To download and prepare our second stage dataset, please check our
[second stage dataset preparation instruction](dataset/README_2_STAGE.md).
To launch the second stage alignment,
first specify the path to the checkpoint file trained in stage 1 in
[train_configs/minigpt4_stage2_finetune.yaml](train_configs/minigpt4_stage2_finetune.yaml).
You can also specify the output path there.
Then, run the following command. In our experiments, we use 1 A100.
```bash
torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml
```
After the second stage alignment, MiniGPT-4 is able to talk about the image coherently and in a user-friendly way.
## Acknowledgement
+ [BLIP2](https://huggingface.co/docs/transformers/main/model_doc/blip-2) The model architecture of MiniGPT-4 follows BLIP-2. Don't forget to check this great open-source work if you don't know it before!
+ [Lavis](https://github.com/salesforce/LAVIS) This repository is built upon Lavis!
+ [Vicuna](https://github.com/lm-sys/FastChat) The fantastic language ability of Vicuna with only 13B parameters is just amazing. And it is open-source!
If you're using MiniGPT-4 in your research or applications, please cite using this BibTeX:
```bibtex
@misc{zhu2022minigpt4,
title={MiniGPT-4: Enhancing Vision-language Understanding with Advanced Large Language Models},
author={Deyao Zhu and Jun Chen and Xiaoqian Shen and Xiang Li and Mohamed Elhoseiny},
year={2023},
}
```
## License
This repository is under [BSD 3-Clause License](LICENSE.md).
Many codes are based on [Lavis](https://github.com/salesforce/LAVIS) with
BSD 3-Clause License [here](LICENSE_Lavis.md).
| [
-0.5627525448799133,
-0.6834515929222107,
0.5311148166656494,
-0.04545726627111435,
-0.4393371641635895,
-0.3400447368621826,
-0.18691587448120117,
-0.44523152709007263,
-0.031763143837451935,
0.1893898993730545,
-0.6574622392654419,
-0.39622533321380615,
-0.465477854013443,
-0.06614837050... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyKorshuk/gpteacher-role-play-chatml | AlekseyKorshuk | 2023-07-24T22:32:56Z | 13 | 7 | null | [
"region:us"
] | 2023-07-24T22:32:56Z | 2023-04-27T20:08:22.000Z | 2023-04-27T20:08:22 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 6168190
num_examples: 9111
download_size: 0
dataset_size: 6168190
---
# Dataset Card for "gpteacher-role-play-chatml"
Data preprocessing pipeline: https://github.com/AlekseyKorshuk/chat-data-pipeline | [
-0.27258923649787903,
-0.38943660259246826,
-0.03704528883099556,
0.22749340534210205,
-0.16339117288589478,
0.14646410942077637,
-0.08911387622356415,
0.11609716713428497,
0.2578554153442383,
0.6282426714897156,
-1.0622514486312866,
-1.1702743768692017,
-0.5030381083488464,
-0.49423518776... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrainingDataPro/license_plates | TrainingDataPro | 2023-09-14T16:42:28Z | 13 | 3 | null | [
"task_categories:image-to-text",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"region:us"
] | 2023-09-14T16:42:28Z | 2023-05-03T07:38:20.000Z | 2023-05-03T07:38:20 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-to-text
language:
- en
tags:
- finance
dataset_info:
- config_name: Brazil_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 173536648
num_examples: 72
download_size: 22606962
dataset_size: 173536648
- config_name: Estonia_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 7990452
num_examples: 10
download_size: 7863164
dataset_size: 7990452
- config_name: Finland_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 9650579
num_examples: 10
download_size: 9485725
dataset_size: 9650579
- config_name: Kazakhstan_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 14064541
num_examples: 19
download_size: 7265915
dataset_size: 14064541
- config_name: Kazakhstan_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 6324396
num_examples: 22
download_size: 2852873
dataset_size: 6324396
- config_name: Lithuania_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 8127614
num_examples: 10
download_size: 7940839
dataset_size: 8127614
- config_name: Serbia_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 10000777
num_examples: 10
download_size: 9808356
dataset_size: 10000777
- config_name: Serbia_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 26535839
num_examples: 67
download_size: 4044272
dataset_size: 26535839
- config_name: UAE_platesmania
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 8236358
num_examples: 10
download_size: 8028800
dataset_size: 8236358
- config_name: UAE_youtube
features:
- name: image
dtype: image
- name: labeled_image
dtype: image
- name: bbox
dtype: string
- name: license_plate.id
dtype: string
- name: license_plate.visibility
dtype: string
- name: license_plate.rows_count
dtype: uint8
- name: license_plate.number
dtype: string
- name: license_plate.serial
dtype: string
- name: license_plate.country
dtype: string
- name: license_plate.mask
dtype: string
splits:
- name: train
num_bytes: 41202317
num_examples: 162
download_size: 2666314
dataset_size: 41202317
---
# License Plates
Over **1.2 million** annotated license plates from vehicles around the world. This dataset is tailored for **License Plate Recognition tasks** and includes images from both YouTube and PlatesMania.
Annotation details are provided in the About section below.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=license_plates) to discuss your requirements, learn about the price and buy the dataset.
# About
## Variables in .csv files:
- **file_name** - filename of the original car photo
- **license_plate.country** - country where the vehicle was captured
- **bbox** - normalized Bounding Box labeling of the car
- **license_plate.visibility** - the visibility type of the license plate
- **license_plate.id** - unique license plate's id
- **license_plate.mask** - normalized coordinates of the license plate
- **license_plate.rows_count** - single-line or double-line number
- **license_plate.number** - recognized text of the license plate
- **license_plate.serial** - only for UAE numbers - license plate series
- **license_plate.region** - only for UAE numbers - license plate subregion
- **license_plate.color** - only for Saudi Arabia - color of the international plate code
**How it works**: *go to the folder of the relevant country; the CSV file in it contains all labeling information for the images located in that folder's "photos" subfolder.*
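For orientation, here is a minimal loading sketch. The CSV file name (`annotations.csv`), the country folder (`Serbia`), and parsing `bbox`/`license_plate.mask` as Python-literal strings are assumptions for illustration; adjust them to the actual files in your copy of the dataset.
```python
import ast

import pandas as pd

# Hypothetical layout: <country>/<labels>.csv plus <country>/photos/ with images
df = pd.read_csv("Serbia/annotations.csv")

row = df.iloc[0]
image_path = f"Serbia/photos/{row['file_name']}"      # original car photo
bbox = ast.literal_eval(row["bbox"])                  # normalized car bounding box
mask = ast.literal_eval(row["license_plate.mask"])    # normalized plate coordinates
print(row["license_plate.number"], row["license_plate.country"], bbox, mask)
```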
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=license_plates) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | [
-0.7654334306716919,
-0.23930326104164124,
-0.051136039197444916,
0.3595684766769409,
-0.45605936646461487,
0.024404402822256088,
0.01884385757148266,
-0.5882318019866943,
0.2342870682477951,
0.6924489140510559,
-0.5416977405548096,
-0.8146578073501587,
-0.38120031356811523,
-0.04512775689... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
helenlu/ade20k | helenlu | 2023-05-12T03:51:47Z | 13 | 1 | null | [
"region:us"
] | 2023-05-12T03:51:47Z | 2023-05-11T06:05:44.000Z | 2023-05-11T06:05:44 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shi3z/alpaca_cleaned_ja_json | shi3z | 2023-08-25T23:18:42Z | 13 | 4 | null | [
"task_categories:text-generation",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | 2023-08-25T23:18:42Z | 2023-05-17T06:37:34.000Z | 2023-05-17T06:37:34 | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- ja
configs:
- config_name: default
data_files:
- split: train
path: "alpaca_cleaned_ja.json"
- split: test
path: "alpaca_cleaned_ja.json"
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.545617401599884,
-0.42588162422180176,
-0.051285725086927414,
0.38739174604415894,
-0.4620097875595093,
0.054228655993938446,
-0.24659407138824463,
-0.2884671688079834,
0.6999504566192627,
0.5781952142715454,
-0.9070088267326355,
-1.1513409614562988,
-0.7566764950752258,
0.0290524754673... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Norquinal/WizardLM_alpaca_claude_evol_instruct_70k | Norquinal | 2023-05-18T23:09:15Z | 13 | 9 | null | [
"license:apache-2.0",
"region:us"
] | 2023-05-18T23:09:15Z | 2023-05-18T13:56:26.000Z | 2023-05-18T13:56:26 | ---
license: apache-2.0
---
WizardLM's instructions with Claude's outputs. Includes an unfiltered version as well. | [
-0.5846055746078491,
-0.3841230273246765,
0.526012659072876,
0.43915706872940063,
-0.34674710035324097,
-0.22873269021511078,
0.05666679888963699,
-0.05401533469557762,
0.6684189438819885,
1.72871994972229,
-0.89058518409729,
-0.2737342417240143,
-0.5952297449111938,
-0.21498022973537445,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TrainingDataPro/pose_estimation | TrainingDataPro | 2023-09-14T16:47:12Z | 13 | 2 | null | [
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | 2023-09-14T16:47:12Z | 2023-05-19T11:17:45.000Z | 2023-05-19T11:17:45 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
language:
- en
tags:
- code
- finance
dataset_info:
features:
- name: image_id
dtype: uint32
- name: image
dtype: image
- name: mask
dtype: image
- name: shapes
dtype: string
splits:
- name: train
num_bytes: 142645152
num_examples: 29
download_size: 137240523
dataset_size: 142645152
---
# Pose Estimation
The dataset is primarily intended for identifying and predicting the positions of the major joints of a human body in an image. It consists of photographs of people with body parts labeled with keypoints.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=pose_estimation) to discuss your requirements, learn about the price and buy the dataset.

# Data Format
Each image from `EP` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the key points. For each point, the x and y coordinates are provided, and there is a `Presumed_Location` attribute, indicating whether the point is presumed or accurately defined.
# Example of XML file structure
.png?generation=1684358333663868&alt=media)
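A minimal parsing sketch for such a file is shown below. The element and attribute names (`image`, `points`, `label`, `Presumed_Location`) are assumptions modeled on a CVAT-style export and should be checked against the actual structure in the screenshot above.
```python
import xml.etree.ElementTree as ET

tree = ET.parse("annotations.xml")
for image in tree.getroot().iter("image"):
    print("image:", image.get("name"))
    for point in image.iter("points"):
        x, y = map(float, point.get("points").split(","))  # "x,y" per keypoint
        # Presumed_Location flags keypoints whose position was estimated
        print(" ", point.get("label"), (x, y), point.get("Presumed_Location"))
```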
# Labeled body parts
Each keypoint is ordered and corresponds to the concrete part of the body:
0. **Nose**
1. **Neck**
2. **Right shoulder**
3. **Right elbow**
4. **Right wrist**
5. **Left shoulder**
6. **Left elbow**
7. **Left wrist**
8. **Right hip**
9. **Right knee**
10. **Right foot**
11. **Left hip**
12. **Left knee**
13. **Left foot**
14. **Right eye**
15. **Left eye**
16. **Right ear**
17. **Left ear**
# Keypoint annotation is made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=pose_estimation) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | [
-0.33552876114845276,
-0.30803319811820984,
0.49677586555480957,
-0.07498298585414886,
-0.24350859224796295,
-0.11134535074234009,
0.2249969094991684,
-0.5344552397727966,
0.3920340836048126,
0.33357250690460205,
-0.6952024102210999,
-1.1446707248687744,
-0.6326060891151428,
-0.08491791784... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ccmusic-database/chest_falsetto | ccmusic-database | 2023-10-03T17:14:13Z | 13 | 4 | null | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | 2023-10-03T17:14:13Z | 2023-05-25T13:53:10.000Z | 2023-05-25T13:53:10 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Chest voice and Falsetto Database
size_categories:
- 1K<n<10K
---
# Dataset Card for Chest voice and Falsetto Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains 1280 monophonic singing audio recordings (.wav format) of chest and falsetto voices, with chest voice tagged as _chest_ and falsetto voice tagged as _falsetto_.
### Supported Tasks and Leaderboards
Audio classification, singing method classification, voice classification
### Languages
Chinese, English
## Dataset Structure
### Data Instances
.zip(.wav, .jpg)
### Data Fields
m_chest, f_chest, m_falsetto, f_falsetto
### Data Splits
train, validation, test
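As a preprocessing illustration: a log-mel spectrogram is a common input representation for this kind of singing-method classification. The sketch below is a generic example using `librosa`; the file path is a placeholder, not a file guaranteed to exist in the archive.
```python
import librosa
import numpy as np

# Placeholder path to one of the extracted .wav files
y, sr = librosa.load("m_chest/example.wav", sr=None)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)  # (n_mels, frames) features for a classifier
```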
## Dataset Creation
### Curation Rationale
Lack of a dataset for Chest voice and Falsetto
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Students from CCMUSIC
### Annotations
#### Annotation process
1280 monophonic singing audio recordings (.wav format) of chest and falsetto voices, with chest voice tagged as _chest_ and falsetto voice tagged as _falsetto_.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Only for chest and falsetto voices
### Other Known Limitations
Recordings are cut into slices that are too short
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
Coming soon...
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for distinguishing chest and falsetto voices | [
-0.5476288199424744,
-0.5424981713294983,
0.02811466157436371,
0.37845563888549805,
-0.34711623191833496,
0.033411599695682526,
-0.21032075583934784,
-0.49369797110557556,
0.40286362171173096,
0.7622712254524231,
-1.1583917140960693,
-0.9862799048423767,
-0.08305934071540833,
-0.0491541735... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edarchimbaud/earnings-forecast-stocks | edarchimbaud | 2023-11-11T23:13:06Z | 13 | 2 | null | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | 2023-11-11T23:13:06Z | 2023-05-28T22:48:23.000Z | 2023-05-28T22:48:23 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: id
dtype: int64
- name: fiscal_end
dtype: string
- name: consensus_eps_forecast
dtype: float64
- name: high_eps_forecast
dtype: float64
- name: low_eps_forecast
dtype: float64
- name: no_of_estimates
dtype: int64
- name: up
dtype: int64
- name: down
dtype: int64
splits:
- name: train
num_bytes: 8431444
num_examples: 94547
download_size: 768366
dataset_size: 8431444
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "earnings-forecast-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The earnings-forecast-sp500 dataset provides information about the earnings forecast for the S&P 500 index constituents. The dataset includes features that detail each company's fiscal end, the consensus earnings per share (EPS) forecast, the high and low EPS forecasts, the number of estimates, and the number of upward and downward revisions.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string indicating the date of the forecast.
- id (int64): An integer representing the unique identifier for the forecast.
- fiscal_end (string): A string indicating the fiscal end date for the forecast.
- consensus_eps_forecast (float64): A floating-point number representing the consensus earnings per share forecast.
- high_eps_forecast (float64): A floating-point number representing the highest earnings per share forecast.
- low_eps_forecast (float64): A floating-point number representing the lowest earnings per share forecast.
- no_of_estimates (int64): An integer representing the number of estimates contributing to the consensus forecast.
- up (int64): An integer representing the number of upward revisions to the forecast.
- down (int64): An integer representing the number of downward revisions to the forecast.
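As a usage sketch (assuming the standard `datasets` loader works for this repository), forecast dispersion can be computed directly from these fields:
```python
from datasets import load_dataset

ds = load_dataset("edarchimbaud/earnings-forecast-stocks", split="train")

row = ds[0]
# Spread between the most optimistic and most pessimistic EPS estimates
dispersion = row["high_eps_forecast"] - row["low_eps_forecast"]
print(row["symbol"], row["consensus_eps_forecast"], dispersion, row["no_of_estimates"])
```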
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The earnings-forecast-sp500 dataset was developed to support the development of high-frequency trading algorithms and investment strategies that rely on earnings forecasts.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from financial data providers and normalized for consistency.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The earnings-forecast-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The earnings-forecast-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, earnings-forecast-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | [
-0.37822917103767395,
-0.23819346725940704,
0.06905815005302429,
0.5519198775291443,
-0.2595955729484558,
0.09668702632188797,
0.05576964095234871,
-0.47684192657470703,
0.8448768258094788,
0.2789963483810425,
-1.1675490140914917,
-0.4859979450702667,
-0.5930137038230896,
-0.04652493819594... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
edarchimbaud/extended-trading-stocks | edarchimbaud | 2023-11-11T23:14:52Z | 13 | 2 | null | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | 2023-11-11T23:14:52Z | 2023-05-28T22:48:38.000Z | 2023-05-28T22:48:38 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: time
dtype: string
- name: price
dtype: float64
- name: share_volume
dtype: string
splits:
- name: train
num_bytes: 84477405
num_examples: 1800058
download_size: 14923692
dataset_size: 84477405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "extended-trading-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The extended-trading-sp500 dataset contains detailed information on the extended trading of the S&P 500 index.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string representing the date of the trading.
- time (string): A string representing the time of the trading.
- price (float64): A floating-point number representing the price of the stock at the given date and time.
- share_volume (string): A string representing the volume of shares traded during this time.
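Because `date`, `time`, and `share_volume` are stored as strings, a small normalization step is usually needed. A hedged sketch (the exact string formats are not documented here, so invalid values are coerced):
```python
import pandas as pd
from datasets import load_dataset

df = load_dataset("edarchimbaud/extended-trading-stocks", split="train").to_pandas()

# Combine the string date and time columns into one timestamp
df["timestamp"] = pd.to_datetime(df["date"] + " " + df["time"], errors="coerce")
# share_volume may contain thousands separators
df["share_volume"] = pd.to_numeric(df["share_volume"].str.replace(",", ""), errors="coerce")
print(df[["symbol", "timestamp", "price", "share_volume"]].head())
```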
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The extended-trading-sp500 dataset was developed to support research into after-hours trading patterns and behaviors.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from various trading platforms and aggregated for this dataset.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The extended-trading-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The extended-trading-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, extended-trading-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | [
-0.4461154639720917,
-0.39232492446899414,
-0.08557803928852081,
0.2868943214416504,
-0.1891285479068756,
0.24410763382911682,
-0.30570998787879944,
-0.48610642552375793,
0.9548546671867371,
0.41715008020401,
-0.9717135429382324,
-0.7084152698516846,
-0.4342440664768219,
0.0648431628942489... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LinkSoul/instruction_merge_set | LinkSoul | 2023-10-25T10:39:46Z | 13 | 113 | null | [
"region:us"
] | 2023-10-25T10:39:46Z | 2023-05-31T12:16:24.000Z | 2023-05-31T12:16:24 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 13444870155
num_examples: 10077297
download_size: 3542585235
dataset_size: 13444870155
---
# Dataset Card for "instruction_merge_set"
## This dataset is composed of the following datasets:
| Data (id in the merged set) | Hugging Face URL | notes |
| --- | --- | --- |
| OIG (unified-任务名称) 15k | https://huggingface.co/datasets/laion/OIG | Open Instruction Generalist Dataset |
| Dolly databricks-dolly-15k | https://huggingface.co/datasets/databricks/databricks-dolly-15k | an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories |
| UltraChat | https://huggingface.co/datasets/stingning/ultrachat | multi-round dialogue data |
| Camel | https://huggingface.co/datasets/camel-ai/ai_society | 25K conversations between two gpt-3.5-turbo agents. |
| camel (同上) | https://github.com/camel-ai/camel | |
| ChatDoctor icliniq-15k HealthCareMagic-200k | https://github.com/Kent0n-Li/ChatDoctor | 200k real conversations between patients and doctors from HealthCareMagic.com 15k real conversations between patients and doctors from iciniq-10k |
| Dolly | https://github.com/databrickslabs/dolly | |
| GPT4ALL | https://github.com/nomic-ai/gpt4all | |
| GPT-4-LLM comparision_data_b alpaca_gpt4_data_zh comparision_data_a alpaca_gpt4_data 5k | https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM | English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. Chinese Instruction-Following Data generated by GPT-4 using Chinese prompts translated from Alpaca by ChatGPT. Comparison Data ranked by GPT-4 to train reward models. Answers on Unnatural Instructions Data from GPT-4 to quantify the gap between GPT-4 and instruction-tuned models at scale. |
| GuanacoDataset guanaco_chat_all-utf8 guanaco_non_chat-utf8 paper_answers-utf8 general_ans-utf8 general_questions-utf8 paper_questions-utf8 30k | https://huggingface.co/datasets/JosephusCheung/GuanacoDataset | The dataset for the Guanaco model is designed to enhance the multilingual capabilities and address various linguistic tasks. It builds upon the 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. The Paper/General-QA dataset is a collection of questions and answers constructed for AI-generated papers or general texts in English, Chinese, Japanese, and German. |
| HC3 ALL | https://huggingface.co/datasets/Hello-SimpleAI/HC3 | human-ChatGPT comparison datasets |
| instinwild instinwild_en instinwild_ch 5k | https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/instinwild | Instruction-Finetuning Dataset Collection (Alpaca-CoT) |
| Instruct-to-Code | https://huggingface.co/datasets/Graverman/Instruct-to-Code | |
| ShareGPT90K sg_90k_part2 sg_90k_part1 | https://huggingface.co/datasets/RyokoAI/ShareGPT52K | 90,000 conversations scraped via the ShareGPT API before it was shut down. These conversations include both user prompts and responses from OpenAI's ChatGPT. |
| UltraChat ultrachat_material_release_230412 ultrachat_release_230407 | https://github.com/thunlp/UltraChat | |
| wealth-alpaca-lora final_dataset_clean 4.3k | https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora | combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5; includes instructions |
| Alpaca alpaca_data 5k | https://github.com/tatsu-lab/stanford_alpaca | instruct-tuning |
| Baize alpaca_chat_data medical_chat_data quora_chat_data stack_overflow_chat_data | https://github.com/project-baize/baize-chatbot | instruction-following data we used for fine-tuning the Alpaca model. |
| botbots Reasoning flight_bookings medical_appointments travel_agency restaurants_mixed real_estate car_dealership home_maintenance, job_interview 'insurance_consultation': 16, 'hotels': 400, 'tech_support': 32, 'car_rentals': 32, 'pet_care': 48, 'restaurants': 200, 'legal_consultation': 16, 'event_tickets': 240, 'fitness_personal_training': 16, 'scientific_problems': 100 | https://github.com/radi-cho/botbots | A dataset consisting of dialogues between two instances of ChatGPT (gpt-3.5-turbo). The CLI commands and dialogue prompts themselves have been written by GPT-4. The dataset covers a wide range of contexts (questions and answers, arguing and reasoning, task-oriented dialogues) and downstream tasks (e.g., hotel reservations, medical advice). |
| ChatAlpaca chatalpaca_data_10k | https://github.com/cascip/ChatAlpaca | a chat dataset, multi-turn instruction-following conversations. |
| DERA train | https://github.com/curai/curai-research/tree/main/DERA | The following repository contains the open-ended question-answering version of MedQA. |
| GPTeacher Toolformer-dedupe-only-dataset roleplay-simple-deduped-roleplay-dataset gpt4-instruct-dedupe-only-dataset | https://github.com/teknium1/GPTeacher | A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer |
| OpenAGI | https://github.com/agiresearch/OpenAGI | |
| presto | https://github.com/google-research-datasets/presto | A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs |
| [
-0.3965212106704712,
-0.9757702350616455,
0.15420567989349365,
0.10558617115020752,
-0.05257941782474518,
-0.09653306007385254,
-0.20558282732963562,
-0.4158073663711548,
0.15126539766788483,
0.37914589047431946,
-0.640728771686554,
-0.7480546236038208,
-0.2800232470035553,
-0.157146200537... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cjvt/janes_tag | cjvt | 2023-06-06T10:07:53Z | 13 | 0 | null | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:sl",
"license:cc-by-sa-4.0",
"code-mixed",
"nonstandard",
"ner",
"region:us"
] | 2023-06-06T10:07:53Z | 2023-06-05T10:35:43.000Z | 2023-06-05T10:35:43 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: id
dtype: string
- name: words
sequence: string
- name: lemmas
sequence: string
- name: msds
sequence: string
- name: nes
sequence: string
splits:
- name: train
num_bytes: 2653609
num_examples: 2957
download_size: 2871765
dataset_size: 2653609
task_categories:
- token-classification
language:
- sl
tags:
- code-mixed
- nonstandard
- ner
size_categories:
- 1K<n<10K
---
# Dataset Card for Janes-Tag
### Dataset Summary
Janes-Tag is a manually annotated corpus of Slovene Computer-Mediated Communication (CMC) consisting of mostly tweets but also blogs, forums and news comments.
### Languages
Code-switched/nonstandard Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset - each word is annotated with its form (`word`), lemma, MSD tag (XPOS), and IOB2-encoded named entity tag.
```
{
'id': 'janes.news.rtvslo.279732.2',
'words': ['Jst', 'mam', 'tud', 'dons', 'rojstn', 'dan', '.'],
'lemmas': ['jaz', 'imeti', 'tudi', 'danes', 'rojsten', 'dan', '.'],
'msds': ['mte:Pp1-sn', 'mte:Vmpr1s-n', 'mte:Q', 'mte:Rgp', 'mte:Agpmsay', 'mte:Ncmsan', 'mte:Z'],
'nes': ['O', 'O', 'O', 'O', 'O', 'O', 'O']
}
```
### Data Fields
- `id`: unique identifier of the example;
- `words`: words in the example;
- `lemmas`: lemmas in the example;
- `msds`: msds in the example;
- `nes`: IOB2-encoded named entity tag (person, location, organization, misc, other)
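A minimal loading sketch (assuming the standard `datasets` loader; depending on your `datasets` version, a loading script may require `trust_remote_code=True`):
```python
from datasets import load_dataset

ds = load_dataset("cjvt/janes_tag", split="train")

ex = ds[0]
# Pair each word with its lemma, MSD tag, and IOB2 named entity tag
for word, lemma, msd, ne in zip(ex["words"], ex["lemmas"], ex["msds"], ex["nes"]):
    print(f"{word}\t{lemma}\t{msd}\t{ne}")
```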
## Additional Information
### Dataset Curators
Jakob Lenardič et al. (please see http://hdl.handle.net/11356/1732 for the full list)
### Licensing Information
CC BY-SA 4.0.
### Citation Information
```
@misc{janes_tag,
title = {{CMC} training corpus Janes-Tag 3.0},
author = {Lenardi{\v c}, Jakob and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Erjavec, Toma{\v z} and Fi{\v s}er, Darja and Ljube{\v s}i{\'c}, Nikola and Zupan, Katja and Dobrovoljc, Kaja},
url = {http://hdl.handle.net/11356/1732},
note = {Slovenian language resource repository {CLARIN}.{SI}},
copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. | [
-0.23602214455604553,
-0.44650647044181824,
0.13417841494083405,
0.1552492082118988,
-0.2928529679775238,
-0.16518892347812653,
-0.19921496510505676,
0.04146755114197731,
0.29593080282211304,
0.4816696345806122,
-0.7911589741706848,
-1.2118282318115234,
-0.779583752155304,
0.32456395030021... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Weni/LLM-base | Weni | 2023-08-25T18:00:38Z | 13 | 0 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:pt",
"region:us"
] | 2023-08-25T18:00:38Z | 2023-06-09T18:21:54.000Z | 2023-06-09T18:21:54 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: resposta
dtype: string
- name: context
dtype: string
- name: correct_ans
dtype: int64
splits:
- name: train
num_bytes: 18628924
num_examples: 29073
download_size: 8866205
dataset_size: 18628924
task_categories:
- question-answering
language:
- pt
pretty_name: LLM_Base_QnA
size_categories:
- 10K<n<100K
---
# Dataset Card for "LLM-base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5925280451774597,
-0.2541314363479614,
0.32638996839523315,
0.22488029301166534,
-0.25932905077934265,
0.12384474277496338,
0.2678070664405823,
-0.02192065678536892,
0.841102123260498,
0.6983696818351746,
-0.9953465461730957,
-0.9904436469078064,
-0.6003227829933167,
-0.2819263339042663... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LennardZuendorf/legalis | LennardZuendorf | 2023-10-07T20:14:00Z | 13 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:de",
"license:mit",
"legal",
"region:us"
] | 2023-10-07T20:14:00Z | 2023-06-18T14:50:36.000Z | 2023-06-18T14:50:36 | ---
license: mit
dataset_info:
features:
- name: id
dtype: int64
- name: file_number
dtype: string
- name: date
dtype: timestamp[us]
- name: type
dtype: string
- name: content
dtype: string
- name: tenor
dtype: string
- name: facts
dtype: string
- name: reasoning
dtype: string
- name: winner
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 159271707.27722773
num_examples: 2660
- name: test
num_bytes: 8442598.017326733
num_examples: 141
download_size: 83977470
dataset_size: 167714305.29455447
task_categories:
- text-classification
language:
- de
tags:
- legal
pretty_name: labeled German Court case decisions
size_categories:
- 1K<n<10K
---
# Dataset Card for openlegaldata.io bulk case data
## Dataset Description
This is a labeled version of my already edited data from [openlegaldata.io](https://de.openlegaldata.io/).
#### The Entire Dataset Is In German
- **GitHub Repository:** [uniArchive-legalis](https://github.com/LennardZuendorf/uniArchive-legalis)
- **Processed Data:** [openlegaldata-processed](https://huggingface.co/datasets/LennardZuendorf/openlegaldata-processed)
- **Original Bulk Data:** [Bulk Data](https://static.openlegaldata.io/dumps/de/)
## Edit Summary
- This data is based on already processed data from openlegaldata. Repositories for both can be found on Hugging Face (links above).
### Data Fields
| id | court | file_number | date | type | content | tenor | reasoning | facts |
| - | - | - | - | - | - | - | - | - |
| numeric id | name of the court that made the decision | file number of the case ("Aktenzeichen") | decision date | type of the case decision | entire content (text) of the case decision | An abstract, legal summary of the cases decision | the entire rest of the decision, explaining in detail why the decision has been made | the facts and details of a case |
Additionally, I have added two fields that label the data.
#### label fields
- The labels are created using ChatGPT to extract/summarize the tenor (the summary of the decision) down to a winner. **This might lead to errors**. While I have checked the data occasionally, I have not checked every single decision of the 2800 cases. But for my project, which was a proof of concept for university, this is more than enough.
- see Github for the used Jupyter Notebook
| winner | label |
| - | - |
| Winner in text form - plaintiff ("Kläger*in") or defendant ("Verklagte*r") | binary label: 1 if plaintiff won, 0 if defendant won |
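A quick sketch for inspecting the class balance of these labels (assuming the standard `datasets` loader):
```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("LennardZuendorf/legalis")

# label: 1 = plaintiff won, 0 = defendant won
print("train:", Counter(ds["train"]["label"]))
print("test:", Counter(ds["test"]["label"]))
```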
### Languages
- German
## Additional Information
### Licensing/Citation Information
The [openlegaldata platform](https://github.com/openlegaldata/oldp) is licensed under the MIT license, you can access the dataset by citing the original source, [openlegaldata.io](https://de.openlegaldata.io/) and me, [Lennard Zündorf](https://github.com/LennardZuendorf) as the editor of this dataset. | [
-0.1779923439025879,
-0.645027220249176,
0.35468170046806335,
0.3101259768009186,
-0.4280851185321808,
-0.28846320509910583,
-0.22007893025875092,
-0.142704039812088,
0.44340619444847107,
0.5655574202537537,
-0.1563715934753418,
-0.9490270614624023,
-0.5652761459350586,
-0.1779462248086929... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
d0rj/oasst1_pairwise_rlhf_reward-ru | d0rj | 2023-06-21T15:39:42Z | 13 | 0 | null | [
"region:us"
] | 2023-06-21T15:39:42Z | 2023-06-21T15:39:37.000Z | 2023-06-21T15:39:37 | ---
dataset_info:
features:
- name: lang
dtype: string
- name: parent_id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 67126933.0
num_examples: 17966
- name: validation
num_bytes: 3526794.0
num_examples: 952
download_size: 32509550
dataset_size: 70653727.0
---
# Dataset Card for "oasst1_pairwise_rlhf_reward-ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.27930253744125366,
-0.2701171040534973,
0.10089406371116638,
0.07707937061786652,
-0.3097427189350128,
0.009841990657150745,
0.40193837881088257,
-0.12250617891550064,
0.953002393245697,
0.33932727575302124,
-0.8865536451339722,
-0.6176769137382507,
-0.6076542735099792,
-0.3702101111412... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Waterhorse/chess_data | Waterhorse | 2023-08-14T18:35:02Z | 13 | 4 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"language:en",
"license:apache-2.0",
"arxiv:2306.09200",
"region:us"
] | 2023-08-14T18:35:02Z | 2023-06-28T13:54:28.000Z | 2023-06-28T13:54:28 | ---
license: apache-2.0
task_categories:
- text-generation
- conversational
language:
- en
---
# The Chess Dataset
## Dataset Description
- **Paper:** [ChessGPT: Bridging Policy Learning and Language Modeling](https://arxiv.org/abs/2306.09200)
### Dataset Summary
The dataset consists of three sources of dataset described in the paper, including:
- **ChessCLIP dataset**: Annotated PGNs for training CLIP.
- **ChessGPT Base dataset**: Game dataset, language dataset and mixed dataset for training ChessGPT-Base.
- **ChessGPT Chat dataset**: Conversational dataset for training ChessGPT-Chat.
Because of legal issues, for the ChessGPT dataset we do not open-source the chess-book, chess-forum, chess-blog, and YouTube transcript datasets.
For the ChessCLIP dataset, we do not open-source the two commercial annotated datasets we use.
### Languages
The language of the data is primarily English.
## Dataset Structure
- **ChessCLIP dataset**: Annotated PGNs for training CLIP.
- **ChessGPT Base dataset**: Game dataset: ccrl, pro_player, lichess_db_37, chess_puzzles, chess_modeling. Language dataset: redpajama, oscar, c4, pile, wikipedia, and stackexchange, and mixed dataset: annotated_pgn.
- **ChessGPT Chat dataset**: Chess-related conversation dataset.
### Data Instances
- **ChessCLIP dataset**:
```python
[Event "GMA, Wijk aan Zee NED"]
[Site "?"]
[Date "2003.??.??"]
[Round "1"]
[White "Anand,V"]
[Black "Radjabov,T"]
[Result "1/2"]
[WhiteElo "2750"]
[BlackElo "2620"]
[ECO "C12"]
[PlyCount "55"]
[Annotator "Hathaway"]
1. e4 e6
{ I'm not terribly familiar with the style of Radjabov, so I don't know if this is his usual opening. }
2. d4 d5 3. Nc3 Nf6 (3...Bb4
{ The Winawer Variation is probably best, though not as easy to play. }) 4. Bg5
{ threatens e4-e5xf6 }
(4. e5
{ keeps pieces on the board and avoids ...dxe4 }) 4...Bb4 (4...Be7
{ is more common and aims to trade dark-square bishops to ease Black's cramp }) (4...dxe4
{ aims to avoid any cramp by bringing pieces into alignment for trading, though White does get at least one very good piece (Ne4 or Bg5) and an easier time castling queen-side, to stir up king-side threats } ...
```
- **ChessGPT dataset**:
```python
{"metadata": {"id": "65268522",
"title": "1923 USSR Chess Championship",
"url": "https://en.wikipedia.org/wiki/1923%20USSR%20Chess%20Championship"},
"text": "The 1923 USSR Chess Championship was the second edition of USSR Chess Championship. Held from 8 to 24 July in Petrograd. The tournament was won by Peter Romanovsky.
"pipeline_key": "wiki-squad-train.jsonl.zst-10549"
}
```
- **ChessGPT Chat dataset**:
```python
{"conversations": [
{"author": "Human 0", "text": "Analyze the given sentence and identify its grammatical errors, if any.\nThese type of problems are difficult to solve.", "url": "gpt-4-alpaca", "score": 0},
{"author": "Human 1", "text": "The sentence is incorrect. The error is in the usage of \"these\". Since \"these\" is a plural determiner, it should be used with a plural noun. Similarly, \"type\" is singular noun and it is used with the singular determiner, \"this\".\n\nThe correct sentence should be \"This type of problem is difficult to solve.\" or \"These types of problems are difficult to solve.\"",
"url": "gpt-4-alpaca", "score": 0}
]
}
```
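For the annotated PGNs, move/comment pairs of the kind ChessCLIP is trained on can be extracted with `python-chess`; a minimal sketch (substitute the annotated ChessCLIP example above for the placeholder string):
```python
import io

import chess.pgn

annotated_pgn = "..."  # e.g. the annotated ChessCLIP example above
game = chess.pgn.read_game(io.StringIO(annotated_pgn))

board = game.board()
for node in game.mainline():
    san = board.san(node.move)  # SAN relative to the position before the move
    board.push(node.move)
    if node.comment:
        print(san, "->", node.comment.strip())
```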
### Data Splits
The data is unsplit.
## Dataset Creation
Check [ChessGPT: Bridging Policy Learning and Language Modeling](https://arxiv.org/abs/2306.09200) for more details.
### Licensing Information
**Annotated PGN**: [PGNlib](https://www.angelfire.com/games3/smartbridge/), [lichess](https://lichess.org/terms-of-service), [GameKnot](https://gameknot.com/pg/pol_eula.htm), [pathtomaster](https://www.pathtochessmastery.com/)
**Game Dataset**: [Lichess dataset](https://www.tldrlegal.com/license/creative-commons-cc0-1-0-universal), [CCRL](https://ccrl.chessdom.com/ccrl/), [pro-player](https://www.pgnmentor.com/files.html), [puzzle](https://www.tldrlegal.com/license/creative-commons-cc0-1-0-universal), Modeling data(Apache-2.0).
**Language Dataset** [Wikipedia](https://huggingface.co/datasets/wikipedia#licensing-information), [Redpajama](https://github.com/togethercomputer/RedPajama-Data#license), [Oscar](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information), [Pile](https://github.com/EleutherAI/the-pile/blob/master/LICENSE), [StackExchange](https://archive.org/details/stackexchange), [C4](https://huggingface.co/datasets/allenai/c4#license)
**Conversational Dataset**: [Chessable forums](https://www.chessable.com/terms), [Reddit](https://www.redditinc.com/policies/data-api-terms), [gpt-4](https://openai.com/policies/terms-of-use), [sharegpt](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb), oasst1 (Apache-2.0), dolly-v2 (MIT)
### Citation Information
```bash
@article{feng2023chessgpt,
title={ChessGPT: Bridging Policy Learning and Language Modeling},
author={Feng, Xidong and Luo, Yicheng and Wang, Ziyan and Tang, Hongrui and Yang, Mengyue and Shao, Kun and Mguni, David and Du, Yali and Wang, Jun},
journal={arXiv preprint arXiv:2306.09200},
year={2023}
}
``` | [
-0.35927197337150574,
-0.6642325520515442,
0.3518737852573395,
0.3599824011325836,
-0.20273038744926453,
0.1252446323633194,
-0.39760157465934753,
-0.3169941306114197,
0.22017309069633484,
0.5494844913482666,
-0.5319412350654602,
-0.7575876116752625,
-0.3630601763725281,
-0.098311416804790... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BlackKakapo/instruction-dataset-ro | BlackKakapo | 2023-07-06T12:52:50Z | 13 | 0 | null | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:n<8K",
"language:ro",
"license:apache-2.0",
"region:us"
] | 2023-07-06T12:52:50Z | 2023-07-06T12:43:39.000Z | 2023-07-06T12:43:39 | ---
license: apache-2.0
task_categories:
- question-answering
- text2text-generation
language:
- ro
size_categories:
- n<8K
---
[Original dataset] - This dataset is a Romanian translation of the [instruction-dataset] dataset.
[Original dataset]: <https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset>
[instruction-dataset]: <https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset> | [
-0.17825332283973694,
-0.782189667224884,
0.18500137329101562,
0.1594151258468628,
-0.24017450213432312,
-0.12282466143369675,
-0.09595915675163269,
0.03536650910973549,
0.8392402529716492,
1.2281025648117065,
-1.155282735824585,
-0.5317127704620361,
-0.5711588263511658,
-0.117736510932445... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rcds/MultiLegalNeg | rcds | 2023-10-25T17:59:53Z | 13 | 0 | null | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"license:cc-by-nd-4.0",
"legal",
"arxiv:2306.02069",
"arxiv:2309.08695",
"region:us"
] | 2023-10-25T17:59:53Z | 2023-07-10T16:16:08.000Z | 2023-07-10T16:16:08 | ---
license: cc-by-nd-4.0
viewer: true
task_categories:
- token-classification
tags:
- legal
pretty_name: Multilingual Negation Scope Resolution
size_categories:
- 1K<n<10K
---
# Dataset Card for MultiLegalNeg
### Dataset Summary
This dataset consists of German, French, and Italian court documents annotated for negation cues and negation scopes. It also includes reformatted versions of ConanDoyle-neg ([Morante and Blanco. 2012](https://aclanthology.org/S12-1035/)), SFU Review ([Konstantinova et al. 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf)), BioScope ([Szarvas et al. 2008](https://aclanthology.org/W08-0606/)) and Dalloux ([Dalloux et al. 2020](https://clementdalloux.fr/?page_id=28)).
### Languages
| Language | Subset | Number of sentences | Negated sentences |
|----------------------|-----------------|----------------------|-------------------|
| French | **fr** | 1059 | 382 |
| Italian | **it** | 1001 | 418 |
| German (Germany)      | **de(DE)**      | 1068                 | 1098              |
| German (Switzerland) | **de(CH)** | 206 | 208 |
| English | **SFU Review** | 17672 | 3528 |
| English | **BioScope** | 14700 | 2095 |
| English | **ConanDoyle-neg**| 5714 | 5714 |
| French | **Dalloux** | 11032 | 1817 |
## Dataset Structure
### Data Fields
- text (string): full sentence
- spans (list): list of annotated cues and scopes
- start (int): offset of the beginning of the annotation
- end (int): offset of the end of the annotation
- token_start(int): id of the first token in the annotation
- token_end(int): id of the last token in the annotation
- label (string): CUE or SCOPE
- tokens (list): list of tokens in the sentence
- text (string): token text
- start (int): offset of the first character
- end (int): offset of the last character
- id (int): token id
- ws (boolean): indicates if the token is followed by a white space
### Data Splits
For each subset a train (70%), test (20%), and validation (10%) split is available.
#### How to use this dataset
To load all data use ```'all_all'```, or specify which dataset to load as the second argument. The available configurations are
```'de', 'fr', 'it', 'swiss', 'fr_dalloux', 'fr_all', 'en_bioscope', 'en_sherlock', 'en_sfu', 'en_all', 'all_all'```
```
from datasets import load_dataset
dataset = load_dataset("rcds/MultiLegalNeg", "all_all")
dataset
```
```
DatasetDict({
train: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 26440
})
test: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 7593
})
validation: Dataset({
features: ['text', 'spans', 'tokens'],
num_rows: 4053
})
})
```
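Since `token_start`/`token_end` index into the `tokens` list, the span annotations can be projected to per-token labels in a few lines (a sketch assuming token ids are positional indices):
```python
def spans_to_token_labels(example):
    """One label per token: 'O', 'CUE', or 'SCOPE'."""
    labels = ["O"] * len(example["tokens"])
    for span in example["spans"]:
        for i in range(span["token_start"], span["token_end"] + 1):
            labels[i] = span["label"]
    return labels

example = dataset["train"][0]
tokens = [t["text"] for t in example["tokens"]]
print(list(zip(tokens, spans_to_token_labels(example))))
```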
### Source Data
| Subset | Source |
|-------------------|----------------------|
| **fr** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069) |
| **it** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/), [Niklaus et al. 2023](https://arxiv.org/abs/2306.02069) |
| **de(DE)** | [Glaser et al. 2021](https://www.scitepress.org/Link.aspx?doi=10.5220/0010246308120821) |
| **de(CH)** | [Niklaus et al. 2021](https://aclanthology.org/2021.nllp-1.3/) |
| **SFU Review** | [Konstantinova et al. 2012](http://www.lrec-conf.org/proceedings/lrec2012/pdf/533_Paper.pdf) |
| **BioScope** | [Szarvas et al. 2008](https://aclanthology.org/W08-0606/) |
| **ConanDoyle-neg**| [Morante and Blanco. 2012](https://aclanthology.org/S12-1035/) |
| **Dalloux** | [Dalloux et al. 2020](https://clementdalloux.fr/?page_id=28) |
### Annotations
The data is annotated for negation cues and their scopes. Annotation guidelines are available [here](https://github.com/RamonaChristen/Multilingual_Negation_Scope_Resolution_on_Legal_Data/blob/main/Annotation_Guidelines.pdf)
#### Annotation process
Each language was annotated by one native-speaking annotator, following strict annotation guidelines.
### Citation Information
Please cite the following preprint:
```
@misc{christen2023resolving,
title={Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents},
author={Ramona Christen and Anastassia Shaitarova and Matthias Stürmer and Joel Niklaus},
year={2023},
eprint={2309.08695},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.5518057942390442,
-0.7633689045906067,
0.31061434745788574,
0.08982283622026443,
-0.2966887652873993,
-0.3427935838699341,
-0.26906082034111023,
-0.4639434814453125,
0.6683546304702759,
0.44001758098602295,
-0.5258528590202332,
-0.8918235301971436,
-0.7259102463722229,
0.462493211030960... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dim/mt_bench_en | dim | 2023-07-17T22:51:38Z | 13 | 1 | null | [
"license:mit",
"region:us"
] | 2023-07-17T22:51:38Z | 2023-07-17T22:49:27.000Z | 2023-07-17T22:49:27 | ---
license: mit
dataset_info:
features:
- name: question_id
dtype: int64
- name: category
dtype: string
- name: turns
sequence: string
splits:
- name: train
num_bytes: 34899
num_examples: 80
download_size: 24635
dataset_size: 34899
---
Original Source https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/data/mt_bench/question.jsonl
| [
-0.09393206238746643,
-0.9711650609970093,
0.801338791847229,
0.13386452198028564,
-0.3977023661136627,
-0.1195446103811264,
-0.0913393497467041,
-0.34392982721328735,
0.5248810052871704,
1.0096255540847778,
-0.8276188373565674,
-0.4143649637699127,
-0.2845665216445923,
0.19623225927352905... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HanbingL/midjourney_prompty_dataset | HanbingL | 2023-07-18T06:12:17Z | 13 | 1 | null | [
"region:us"
] | 2023-07-18T06:12:17Z | 2023-07-18T04:49:01.000Z | 2023-07-18T04:49:01 | Entry not found | [
-0.3227648138999939,
-0.22568409144878387,
0.8622256517410278,
0.43461480736732483,
-0.5282989144325256,
0.7012966275215149,
0.7915716171264648,
0.07618606090545654,
0.7746022939682007,
0.25632181763648987,
-0.7852815985679626,
-0.2257382869720459,
-0.9104483723640442,
0.571566641330719,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Dmini/FFHQ-64x64 | Dmini | 2023-07-21T02:36:30Z | 13 | 0 | null | [
"region:us"
] | 2023-07-21T02:36:30Z | 2023-07-21T02:26:03.000Z | 2023-07-21T02:26:03 | Entry not found | [
-0.3227648138999939,
-0.22568409144878387,
0.8622256517410278,
0.43461480736732483,
-0.5282989144325256,
0.7012966275215149,
0.7915716171264648,
0.07618606090545654,
0.7746022939682007,
0.25632181763648987,
-0.7852815985679626,
-0.2257382869720459,
-0.9104483723640442,
0.571566641330719,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PrimeQA/TechQA | PrimeQA | 2023-07-28T14:44:00Z | 13 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-28T14:44:00Z | 2023-07-28T14:31:16.000Z | 2023-07-28T14:31:16 | ---
license: apache-2.0
---
| [
-0.12853386998176575,
-0.18616756796836853,
0.652912974357605,
0.4943627715110779,
-0.1931934952735901,
0.2360743284225464,
0.3607199192047119,
0.05056323856115341,
0.5793654918670654,
0.7400139570236206,
-0.6508104205131531,
-0.2378396987915039,
-0.7102250456809998,
-0.047825999557971954,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RealTimeData/arxiv_july_week1_2023 | RealTimeData | 2023-08-02T00:33:19Z | 13 | 0 | null | [
"region:us"
] | 2023-08-02T00:33:19Z | 2023-08-02T00:33:11.000Z | 2023-08-02T00:33:11 | ---
dataset_info:
features:
- name: entry_id
dtype: string
- name: published
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: primary_category
dtype: string
- name: categories
sequence: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 91779018
num_examples: 2154
download_size: 45522237
dataset_size: 91779018
---
# Dataset Card for "arxiv_july_week1_2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6858202815055847,
-0.06534291803836823,
0.16778284311294556,
0.5553975105285645,
-0.22313855588436127,
-0.402653306722641,
0.6842515468597412,
-0.2917535901069641,
0.7645918726921082,
0.5721595883369446,
-0.9151613712310791,
-0.8182209730148315,
-0.48008161783218384,
-0.0469852648675441... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Basilisk181297/Cars_I_like | Basilisk181297 | 2023-08-02T07:29:11Z | 13 | 1 | null | [
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:depth-estimation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"cars",
"mercedes",
"jpg",
"region:us"
] | 2023-08-02T07:29:11Z | 2023-08-02T06:03:58.000Z | 2023-08-02T06:03:58 | ---
license: apache-2.0
task_categories:
- image-classification
- image-to-text
- depth-estimation
language:
- en
tags:
- cars
- mercedes
- jpg
pretty_name: My Favorite Cars
size_categories:
- n<1K
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adkhamboy/sentiment-uz | adkhamboy | 2023-08-17T02:28:02Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-08-17T02:28:02Z | 2023-08-17T02:05:41.000Z | 2023-08-17T02:05:41 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
luisroque/instruct-python-500k | luisroque | 2023-08-18T09:44:42Z | 13 | 2 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-08-18T09:44:42Z | 2023-08-17T18:14:25.000Z | 2023-08-17T18:14:25 | ---
dataset_info:
features:
- name: score_question
dtype: int16
- name: score_answer
dtype: int16
- name: question
dtype: string
- name: answer
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 987469369
num_examples: 501349
download_size: 550185963
dataset_size: 987469369
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 100K<n<1M
---
# Fine-tuning Instruct Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
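A condensed sketch of this pairing-and-cleaning pipeline (file and column names follow the original Kaggle dump and are assumptions here, not part of this repository):
```python
import pandas as pd
from bs4 import BeautifulSoup

questions = pd.read_csv("Questions.csv", encoding="latin-1")
answers = pd.read_csv("Answers.csv", encoding="latin-1")

# Keep only the top-scoring answer per question (linked via ParentId)
best = answers.sort_values("Score", ascending=False).drop_duplicates("ParentId")
pairs = questions.merge(best, left_on="Id", right_on="ParentId", suffixes=("_q", "_a"))

strip_html = lambda html: BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
pairs["question"] = pairs["Title"] + "\n" + pairs["Body_q"].map(strip_html)
pairs["answer"] = pairs["Body_a"].map(strip_html)

# Exclude entries with negative scores
pairs = pairs[(pairs["Score_q"] >= 0) & (pairs["Score_a"] >= 0)]
```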
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) | [
-0.471410870552063,
-0.8037417531013489,
0.1670362651348114,
0.07307741791009903,
-0.09078871458768845,
-0.00346251018345356,
-0.22659826278686523,
-0.2792917788028717,
0.08022873103618622,
0.6272052526473999,
-0.753075897693634,
-0.4995644986629486,
-0.3562895357608795,
0.2352584153413772... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isotonic/marketing_email_samples | Isotonic | 2023-08-24T13:17:29Z | 13 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-08-24T13:17:29Z | 2023-08-24T13:16:18.000Z | 2023-08-24T13:16:18 | ---
license: cc-by-nc-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 549973.1724738675
num_examples: 487
- name: test
num_bytes: 98249.8275261324
num_examples: 87
download_size: 376029
dataset_size: 648223.0
---
| [
-0.1285339593887329,
-0.1861676424741745,
0.6529127359390259,
0.49436259269714355,
-0.19319337606430054,
0.23607449233531952,
0.36071962118148804,
0.05056334659457207,
0.5793653130531311,
0.7400139570236206,
-0.650810182094574,
-0.23783966898918152,
-0.710224986076355,
-0.04782599955797195... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CatUkraine/ukr-wikipedia-dump | CatUkraine | 2023-09-08T15:52:31Z | 13 | 0 | null | [
"task_categories:text-generation",
"language:uk",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-09-08T15:52:31Z | 2023-08-31T07:54:31.000Z | 2023-08-31T07:54:31 | ---
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: URL
dtype: string
splits:
- name: train
num_bytes: 794379
num_examples: 962
download_size: 400834
dataset_size: 794379
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- uk
---
# Dataset Card for "ukr-wikipedia-dump"
Random scraped pages from Ukrainian Wikipedia.
Scraped using the "wikipedia" module for Python 3.
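A minimal sketch of such scraping with that module; this is an illustration of the approach, not the author's exact script:
```python
import wikipedia

# Fetch one random page from Ukrainian Wikipedia.
wikipedia.set_lang("uk")
title = wikipedia.random()
page = wikipedia.page(title)  # may raise wikipedia.DisambiguationError for ambiguous titles
print(page.title, page.url, page.content[:200])
```
 | [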
-0.3468959927558899,
-0.16853484511375427,
-0.024148333817720413,
-0.10053879022598267,
-0.697076678276062,
-0.27504613995552063,
0.20678023993968964,
-0.10170020163059235,
0.3010331392288208,
0.4079565107822418,
-0.6641169190406799,
-0.5351226925849915,
0.03961225971579552,
0.124998301267... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
khalidalt/arc | khalidalt | 2023-09-05T04:28:01Z | 13 | 0 | null | [
"region:us"
] | 2023-09-05T04:28:01Z | 2023-09-05T04:27:47.000Z | 2023-09-05T04:27:47 | Entry not found | [
-0.32276493310928345,
-0.22568416595458984,
0.8622260093688965,
0.4346145987510681,
-0.5282987356185913,
0.7012965083122253,
0.7915719151496887,
0.07618632912635803,
0.7746023535728455,
0.25632187724113464,
-0.785281777381897,
-0.22573833167552948,
-0.9104480743408203,
0.5715669989585876,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PL-MTEB/hate_speech_pl-clustering | PL-MTEB | 2023-09-12T13:05:06Z | 13 | 0 | null | [
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2023-09-12T13:05:06Z | 2023-09-11T13:57:22.000Z | 2023-09-11T13:57:22 | ---
license: cc-by-nc-sa-3.0
---
| [
-0.1285339593887329,
-0.1861676424741745,
0.6529127359390259,
0.49436259269714355,
-0.19319337606430054,
0.23607449233531952,
0.36071962118148804,
0.05056334659457207,
0.5793653130531311,
0.7400139570236206,
-0.650810182094574,
-0.23783966898918152,
-0.710224986076355,
-0.04782599955797195... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ImagenHub/Text_to_Image | ImagenHub | 2023-11-27T09:27:04Z | 13 | 1 | null | [
"arxiv:2310.01596",
"region:us"
] | 2023-11-27T09:27:04Z | 2023-09-14T21:03:08.000Z | 2023-09-14T21:03:08 | ---
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
- split: DrawBench_trimmed
path: data/DrawBench_trimmed-*
- split: DiffusionDB_trimmed
path: data/DiffusionDB_trimmed-*
- split: Realism
path: data/Realism-*
- split: ABC_trimmed
path: data/ABC_trimmed-*
dataset_info:
features:
- name: prompt
dtype: string
- name: category
dtype: string
- name: source
dtype: string
- name: uid
dtype: int32
splits:
- name: eval
num_bytes: 24907
num_examples: 197
- name: DrawBench_trimmed
num_bytes: 7673
num_examples: 77
- name: DiffusionDB_trimmed
num_bytes: 8173
num_examples: 40
- name: Realism
num_bytes: 5383
num_examples: 40
- name: ABC_trimmed
num_bytes: 3678
num_examples: 40
download_size: 38022
dataset_size: 49814
---
# Dataset Card
Dataset in [ImagenHub](https://arxiv.org/abs/2310.01596).
# Citation
Please kindly cite our paper if you use our code, data, models or results:
```
@article{ku2023imagenhub,
title={ImagenHub: Standardizing the evaluation of conditional image generation models},
author={Max Ku and Tianle Li and Kai Zhang and Yujie Lu and Xingyu Fu and Wenwen Zhuang and Wenhu Chen},
journal={arXiv preprint arXiv:2310.01596},
year={2023}
}
``` | [
-0.2961347997188568,
-0.2845042049884796,
0.17502851784229279,
-0.046440739184617996,
-0.5530214905738831,
-0.7106481194496155,
0.008455425500869751,
-0.3149990737438202,
-0.16685637831687927,
0.5109954476356506,
-0.25519829988479614,
-0.6841367483139038,
-0.44233056902885437,
0.0949995219... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nirbhaysinghnarang/Mahabharat | nirbhaysinghnarang | 2023-09-15T22:08:41Z | 13 | 0 | null | [
"region:us"
] | 2023-09-15T22:08:41Z | 2023-09-15T22:00:53.000Z | 2023-09-15T22:00:53 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Aborevsky01/CLEVR-BT-DB | Aborevsky01 | 2023-09-20T16:44:56Z | 13 | 0 | null | [
"task_categories:visual-question-answering",
"language:en",
"region:us"
] | 2023-09-20T16:44:56Z | 2023-09-17T17:03:32.000Z | 2023-09-17T17:03:32 | ---
task_categories:
- visual-question-answering
language:
- en
---
### How to install?
```python
!pip install datasets -q
from huggingface_hub import snapshot_download
import pandas as pd
import matplotlib.pyplot as plt
# First step: download the entire dataset
snapshot_download(repo_id="Aborevsky01/CLEVR-BT-DB", repo_type="dataset", local_dir='[path-to-your-local-dir]')
# Second step: unarchive the images for VQA
!unzip [path-to-your-local-dir]/[type-of-task]/images.zip
# Example of the triplet (image - question - answer)
plt.imshow(plt.imread('[path-to-your-local-dir]/images/test/Reason_0.png'))
print(pd.read_csv('[path-to-your-local-dir]/[type-of-task]/Reason_test_questions.csv').iloc[0].question)
print([str(line) for line in open('[path-to-your-local-dir]/[type-of-task]/correct_answ.txt', 'rb')][0])
```
### Code output

**Q**: There is an object to the left of a cylinder to the right of a cylinder, what color is it?
**A**: b'blue\n' | [
-0.5691003203392029,
-0.5365206599235535,
0.10953792184591293,
0.3803948163986206,
-0.549626886844635,
-0.006986454129219055,
0.3129757344722748,
-0.1291346698999405,
0.48149749636650085,
0.5425384640693665,
-0.7367650866508484,
-0.5068358778953552,
-0.28386613726615906,
0.3246348798274994... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
indiejoseph/ted-transcriptions-cantonese | indiejoseph | 2023-09-18T19:49:07Z | 13 | 2 | null | [
"region:us"
] | 2023-09-18T19:49:07Z | 2023-09-18T19:49:04.000Z | 2023-09-18T19:49:04 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1569597
num_examples: 249
download_size: 1066997
dataset_size: 1569597
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ted-transcriptions-cantonese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.1632263958454132,
-0.4940939247608185,
0.2351931929588318,
0.5388457179069519,
-0.25648537278175354,
0.0862097218632698,
0.0077858311124145985,
-0.05348014831542969,
0.9952559471130371,
0.5998303890228271,
-0.7590121626853943,
-0.8780533075332642,
-0.5320538878440857,
-0.009468330070376... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TanvirOnHF/greetings | TanvirOnHF | 2023-10-14T15:10:38Z | 13 | 0 | null | [
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"license:cdla-sharing-1.0",
"GPT-3.5",
"GPT-4",
"Claude",
"Bard",
"Alpaca",
"LLaMA",
"LLaMA-2",
"Vicuna",
"PaLM-2",
"Multilingual",
"region:us"
] | 2023-10-14T15:10:38Z | 2023-09-21T16:52:51.000Z | 2023-09-21T16:52:51 | ---
license: cdla-sharing-1.0
pretty_name: Greetings
tags:
- GPT-3.5
- GPT-4
- Claude
- Bard
- Alpaca
- LLaMA
- LLaMA-2
- Vicuna
- PaLM-2
- Multilingual
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
---
# Greetings [TXT dataset]
A dataset comprising artificially generated **greetings** derived from a diverse array of Large Language Models (LLMs) such as GPT-3.5, GPT-4, Claude, Bard, Alpaca, LLaMA, LLaMA-2, Vicuna, and PaLM-2. These greetings cover various types and are expressed in multiple languages.
## Prompt
The prompt used:
```txt
Please generate a diverse range of English greetings, and I'll guide you to continue if I require more. You can also incorporate greetings from different languages and cultures for added diversity. No need for explanations or additional information.
```
## TODO
- Categorize them into types (Formal, Informal/Casual, Professional, Family, Friendship, Multilingual, ...) and Cultural Origin (General, Indian, British, Australian, ...)
## Disclaimer
Please note that while I strive to maintain data quality, I cannot guarantee the accuracy or quality of all entries in this dataset. Use it responsibly and exercise caution when relying on the data for any critical applications. Your feedback and contributions are greatly appreciated for improving the dataset's overall quality.
| [
-0.21131736040115356,
-0.45782166719436646,
0.04919620230793953,
0.385420024394989,
-0.3287513554096222,
0.27250880002975464,
-0.16452479362487793,
-0.41404181718826294,
0.5398051738739014,
0.5239219665527344,
-0.690959095954895,
-0.8143758773803711,
-0.7481686472892761,
0.6672962307929993... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
miikatoi/DocLayNet-tiny | miikatoi | 2023-09-22T06:24:24Z | 13 | 0 | null | [
"region:us"
] | 2023-09-22T06:24:24Z | 2023-09-22T06:22:19.000Z | 2023-09-22T06:22:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: texts
sequence: string
- name: bboxes_block
sequence:
sequence: int64
- name: bboxes_line
sequence:
sequence: int64
- name: categories
sequence:
class_label:
names:
'0': Caption
'1': Footnote
'2': Formula
'3': List-item
'4': Page-footer
'5': Page-header
'6': Picture
'7': Section-header
'8': Table
'9': Text
'10': Title
- name: image
dtype: image
- name: page_hash
dtype: string
- name: original_filename
dtype: string
- name: page_no
dtype: int32
- name: num_pages
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: coco_width
dtype: int32
- name: coco_height
dtype: int32
- name: collection
dtype: string
- name: doc_category
dtype: string
splits:
- name: train
num_bytes: 28393556.512301013
num_examples: 70
- name: validation
num_bytes: 2641091.359375
num_examples: 7
- name: test
num_bytes: 1779922.857142857
num_examples: 5
download_size: 31476812
dataset_size: 32814570.72881887
---
# Dataset Card for "DocLayNet-tiny"
A tiny subset for unit tests, derived from https://huggingface.co/datasets/pierreguillou/DocLayNet-small.
It contains roughly 0.1% of DocLayNet.
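A minimal load-and-inspect sketch, assuming this repo id; `categories` decodes to the class names listed in the schema above:
```python
from datasets import load_dataset

ds = load_dataset("miikatoi/DocLayNet-tiny", split="train")

sample = ds[0]
names = ds.features["categories"].feature.names  # Caption, Footnote, ..., Title
print(sample["original_filename"], [names[i] for i in sample["categories"][:5]])
```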
| [
-0.5225328803062439,
-0.42086952924728394,
0.10460299998521805,
0.0354870930314064,
-0.029541412368416786,
-0.34784865379333496,
0.19559814035892487,
0.24433763325214386,
1.002381682395935,
0.2486981600522995,
-0.70653235912323,
-0.28180429339408875,
-0.02622123435139656,
-0.37893036007881... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HumanCompatibleAI/ppo-seals-Hopper-v1 | HumanCompatibleAI | 2023-09-27T07:06:10Z | 13 | 0 | null | [
"region:us"
] | 2023-09-27T07:06:10Z | 2023-09-26T14:42:54.000Z | 2023-09-26T14:42:54 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 57153894
num_examples: 104
download_size: 12420708
dataset_size: 57153894
---
# Dataset Card for "ppo-seals-Hopper-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5075117349624634,
-0.004508507903665304,
0.06278394162654877,
0.19330871105194092,
-0.37165671586990356,
-0.17632536590099335,
0.8359910249710083,
-0.10693372040987015,
0.8682810664176941,
0.7672867178916931,
-0.8294706344604492,
-0.6856658458709717,
-0.8887589573860168,
-0.203132197260... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vishal24/function_calling | Vishal24 | 2023-09-27T09:44:38Z | 13 | 2 | null | [
"region:us"
] | 2023-09-27T09:44:38Z | 2023-09-27T07:18:28.000Z | 2023-09-27T07:18:28 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
karan4d/machiavellian_synthetic_textbooks | karan4d | 2023-10-03T16:30:11Z | 13 | 2 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-03T16:30:11Z | 2023-10-02T03:05:16.000Z | 2023-10-02T03:05:16 | ---
license: apache-2.0
---
Credits: shoutout to @vikp; this dataset was created with his textbook_quality GitHub repo.
Dataset info: Machiavellian-themed synthetic textbook data for training LLMs.
-0.12912380695343018,
-0.29321590065956116,
0.4111455976963043,
-0.36337727308273315,
-0.4130561351776123,
-0.34319812059402466,
0.2653563320636749,
-0.133698970079422,
0.4271821975708008,
1.023633599281311,
-0.5193922519683838,
-0.9930130243301392,
-0.16241368651390076,
-0.069110095500946... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Pavitra05/finalContent | Pavitra05 | 2023-10-02T20:32:21Z | 13 | 0 | null | [
"region:us"
] | 2023-10-02T20:32:21Z | 2023-10-02T20:25:00.000Z | 2023-10-02T20:25:00 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
HumanCompatibleAI/ppo-Pendulum-v1 | HumanCompatibleAI | 2023-10-04T16:52:12Z | 13 | 0 | null | [
"region:us"
] | 2023-10-04T16:52:12Z | 2023-10-04T16:52:08.000Z | 2023-10-04T16:52:08 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float32
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 2575710
num_examples: 200
download_size: 940375
dataset_size: 2575710
---
# Dataset Card for "ppo-Pendulum-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3908683657646179,
-0.05842825397849083,
0.1751612424850464,
0.26865822076797485,
-0.5840170383453369,
-0.3898654580116272,
0.5238366723060608,
0.020974211394786835,
0.7952396869659424,
0.6498667001724243,
-0.909683108329773,
-0.7788940072059631,
-0.5106557607650757,
-0.5055813789367676,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
teragron/reviews | teragron | 2023-10-09T23:55:54Z | 13 | 1 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:mit",
"finance",
"region:us"
] | 2023-10-09T23:55:54Z | 2023-10-05T13:32:32.000Z | 2023-10-05T13:32:32 | ---
license: mit
language:
- en
tags:
- finance
pretty_name: review_me
size_categories:
- 1M<n<10M
task_categories:
- text-generation
---
The following packages are necessary to compile the model in C:
```bash
sudo apt install gcc-7
```
```bash
sudo apt-get install build-essential
```
```python
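# Notebook cell: "!" shells out and {i} is interpolated by IPython,
# so this downloads the 20 dataset chunks with wget.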
for i in range(1,21):
!wget https://huggingface.co/datasets/teragron/reviews/resolve/main/chunk_{i}.bin
```
```bash
git clone https://github.com/karpathy/llama2.c.git
```
```bash
cd llama2.c
```
```bash
pip install -r requirements.txt
```
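After installing the requirements, the C inference binary can be built and run; a sketch assuming llama2.c's standard Makefile, with `model.bin` as a placeholder checkpoint name:
```bash
make run
./run model.bin
```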
Path: data/TinyStories_all_data | [
-0.2572391629219055,
-0.6310564279556274,
0.6964508295059204,
0.16120795905590057,
-0.2427474856376648,
0.010206238366663456,
0.3746603727340698,
-0.35531511902809143,
0.303259938955307,
0.48360076546669006,
-0.5895063877105713,
-0.6214134693145752,
-0.6530713438987732,
-0.1660446375608444... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TernenceZ/taxdata | TernenceZ | 2023-10-13T02:06:42Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-13T02:06:42Z | 2023-10-07T09:00:23.000Z | 2023-10-07T09:00:23 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PMIndiaData/PMIndiaSum | PMIndiaData | 2023-11-09T19:26:00Z | 13 | 0 | null | [
"task_categories:summarization",
"size_categories:100K<n<1M",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:mr",
"language:ml",
"language:mni",
"language:kn",
"language:pa",
"language:ta",
"language:or",
"language:te",
"language:ur",
"language:en",
"license:cc... | 2023-11-09T19:26:00Z | 2023-10-10T01:00:46.000Z | 2023-10-10T01:00:46 | ---
license: cc-by-4.0
task_categories:
- summarization
language:
- as
- bn
- gu
- hi
- mr
- ml
- mni
- kn
- pa
- ta
- or
- te
- ur
- en
configs:
- config_name: assamese-assamese
data_files:
- split: train
path: assamese-assamese/train.csv
- split: test
path: assamese-assamese/test.csv
- split: valid
path: assamese-assamese/valid.csv
default: true
config_names:
- assamese-assamese
size_categories:
- 100K<n<1M
---
# Dataset Card for "PMIndiaSum"
## Dataset Description
#### Summary
PMIndiaSum is a new multilingual and massively parallel headline summarization corpus focused on languages in India. The corpus covers four language families and 14 languages, and with 196 language pairs it is the largest to date, providing a testing ground for all cross-lingual pairs.
#### Supported tasks
Monolingual, multilingual and cross-lingual summarization for languages in India.
#### Languages
Assamese, Bengali, Gujarati, Hindi, Kannada, Marathi, Malayalam, Manipuri, Punjabi, Oriya, Telugu, Tamil, Urdu, English
## Example Usage
#### Monolingual and cross-lingual summarization
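Configs are named `<source>-<target>`; a minimal sketch (the `telugu-hindi` pair below is an assumption following that pattern):
```python
from datasets import load_dataset

# Monolingual: source and target language are the same.
mono = load_dataset("PMIndiaData/PMIndiaSum", "assamese-assamese")

# Cross-lingual: article in one language, headline in another.
cross = load_dataset("PMIndiaData/PMIndiaSum", "telugu-hindi")
```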
#### Multilingual summarization
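For multilingual training, several pairs can be concatenated; a sketch with an assumed selection of configs:
```python
from datasets import load_dataset, concatenate_datasets

pairs = ["assamese-assamese", "hindi-telugu", "telugu-hindi"]  # assumed selection
multi = concatenate_datasets(
    [load_dataset("PMIndiaData/PMIndiaSum", p, split="train") for p in pairs]
)
```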
## Dataset Structure
#### Data instances
We show an example of a Telugu-Hindi cross-lingual pair from PMIndiaSum:
```
{
"source_url": "https://www.pmindia.gov.in/te/news_updates/%E0%B0%8E%E0%B0%B2%E0%B0%95%E0%B1%8D%E0%B0%9F%E0%B1%8D%E0%B0%B0%E0%B0%BE%E0%B0%A8%E0%B0%BF%E0%B0%95%E0%B1%8D%E0%B0%B8%E0%B1%8D-%E0%B0%87%E0%B0%82%E0%B0%95%E0%B0%BE-%E0%B0%B8%E0%B0%AE%E0%B0%BE/"
"target_url": "https://www.pmindia.gov.in/hi/news_updates/%E0%A4%AA%E0%A5%8D%E0%A4%B0%E0%A4%A7%E0%A4%BE%E0%A4%A8%E0%A4%AE%E0%A4%82%E0%A4%A4%E0%A5%8D%E0%A4%B0%E0%A5%80-%E0%A4%B6%E0%A5%8D%E0%A4%B0%E0%A5%80-%E0%A4%A8%E0%A4%B0%E0%A5%87%E0%A4%A8%E0%A5%8D-45/"
"text": "ఎలక్ట్రానిక్స్, ఇంకా సమాచార సాంకేతిక విజ్ఞానం రంగంలో ద్వైపాక్షిక సహకారాన్ని పెంపొందింపచేయడంలో భారతదేశానికి మరియు అంగోలా కు మధ్య అవగాహనపూర్వక ఒప్పందాన్ని (ఎమ్ఒయు ను) గురించి ప్రధాన మంత్రి శ్రీ నరేంద్ర మోదీ అధ్యక్షతన జరిగిన కేంద్ర మంత్రివర్గ సమావేశం దృష్టి కి తీసుకువచ్చారు. ఈ ఎమ్ఒయు ఇ-గవర్నెన్స్, సమాచార సాంకేతిక విజ్ఞాన సంబంధ విద్య కు అవసరమైన మానవ వనరుల వికాసం, సమాచార భద్రత, ఎలక్ట్రానిక్స్ హార్డ్ వేర్ తయారీ, ఐటి ఎంబెడెడ్ సాఫ్ట్ వేర్ ఇండస్ట్రీ, టెలిమెడిసిన్ తదితర రంగాలలో సన్నిహిత సహకారాన్ని పెంపొందింపచేయడానికి ఉద్దేశించినటువంటిది"
"summary": "मंत्रिमंडल को इलेक्ट्रॉनिक्स एवं संचना प्रौद्योगिकी के क्षेत्र में द्विपक्षीय सहयोग के लिए भारत और अंगोला के बीच समझौता ज्ञापन से अवगत कराया गया"
}
```
#### Data fields
- 'source_url': A string representing the source article URL
- 'target_url': A string representing the target article URL
- 'text': A string containing the article text
- 'summary': A string containing the article summary
### Load the dataset with the `datasets` library
```python
from datasets import load_dataset
dataset = load_dataset("PMIndiaData/PMIndiaSum", "hindi-telugu")
# the second argument is a "<source>-<target>" pair combining any two of the following config names:
# "assamese", "bengali", "english", "gujarati", "hindi", "kannada", "malayalm", "manipuri", "marathi", "punjabi", "odia", "telugu", "tamil", "urdu"
```
## Creation Details
#### Data source
The data source is [PMIndia](https://www.pmindia.gov.in/), with license information available [here](https://www.pmindia.gov.in/en/website-policies/).
We also extensively used materials from the [PMIndia parallel corpus](https://arxiv.org/abs/2001.09907) and its [code](https://github.com/bhaddow/pmindia-crawler).
#### Data construction details
You can find more details in our [paper](https://arxiv.org/abs/2305.08828).
## Other Information
#### License
Our materials are licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). We also request that you respect the [policies](https://www.pmindia.gov.in/en/website-policies/) of the source website.
#### Materials
- **Code repository:** [https://github.com/ashokurlana/pmindiasum](https://github.com/ashokurlana/pmindiasum)
- **Raw data also available at:** [https://drive.google.com/file/d/1KkJ4UbDprtoeeCA6wxfMknWXykYgnLUY/view?usp=sharing](https://drive.google.com/file/d/1KkJ4UbDprtoeeCA6wxfMknWXykYgnLUY/view?usp=sharing)
- **Description paper:** [PMIndiaSum: Multilingual and Cross-lingual Headline Summarization for Languages in India](https://arxiv.org/abs/2305.08828) at EMNLP Findings 2023.
#### Citation
Our work will be published at EMNLP Findings 2023. If you use our code or data, please kindly cite the following:
```
@inproceedings{urlana-etal-2023-pmindiasum,
title={{PMIndiaSum}: Multilingual and Cross-lingual Headline Summarization for Languages in {India}},
author={Urlana, Ashok and Chen, Pinzhen and Zhao, Zheng and Cohen, Shay B. and Shrivastava, Manish and Haddow, Barry},
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
url ={https://arxiv.org/abs/2305.08828},
year={2023}
}
```
#### Contributors
Ashok Urlana, Pinzhen Chen, Zheng Zhao, Shay B. Cohen, Manish Shrivastava, Barry Haddow
#### Contact
Ashok Urlana (ashokurlana@gmail.com) | [
-0.4570867419242859,
-0.47992759943008423,
-0.03346714749932289,
0.5552595257759094,
-0.443631649017334,
-0.027621032670140266,
-0.3770473599433899,
-0.12672069668769836,
0.47910815477371216,
0.05112597346305847,
-0.43578198552131653,
-0.7063806653022766,
-0.48557645082473755,
0.6311624050... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erhwenkuo/c4-chinese-zhtw | erhwenkuo | 2023-10-12T04:00:07Z | 13 | 7 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:1M<n<10M",
"language:zh",
"region:us"
] | 2023-10-12T04:00:07Z | 2023-10-11T13:39:56.000Z | 2023-10-11T13:39:56 | ---
language:
- zh
size_categories:
- 1M<n<10M
task_categories:
- text-generation
- fill-mask
dataset_info:
features:
- name: url
dtype: string
- name: timestamp
dtype: string
- name: content_language
dtype: string
- name: content_type
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12480603148
num_examples: 2967556
download_size: 8659425404
dataset_size: 12480603148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "c4-chinese-zhtw"
## Contents
Common Crawl is a non-profit organization that crawls the web and makes its archives and datasets freely available to the public. Common Crawl's web archive contains petabytes of data collected since 2008, and a new crawl is generally completed every month.
Common Crawl's crawler respects nofollow and robots.txt policies. The open-source code used to process Common Crawl data is publicly available.
This Traditional Chinese dataset was downloaded from the [Common Crawl](https://commoncrawl.org/overview) **2023-14** data archive and then cleaned.
It is based on the version prepared by [jed351](https://huggingface.co/jed351), hosted at:
- https://huggingface.co/datasets/jed351/Traditional-Chinese-Common-Crawl-Filtered
## Supported Tasks
C4 is mainly used for pretraining language models.
## Example
An example instance:
```
{
'url': 'http://www.bilingtong.com/cpzx/96.html',
'timestamp': '2023-03-21 02:12:48',
'content_language': 'zho',
'content_type': 'text/plain',
'text': '新風系統是通過系統設計送風和排風使室內空氣存在一空氣 。無需開窗全天持續不斷有組.....'
}
```
## Data Fields
The data has the following fields:
- `url`: the source URL
- `timestamp`: the crawl timestamp
- `content_language`: the language(s) detected in the content
- `content_type`: the content type (also known as MIME or media type), as declared in the web server response header
- `text`: the cleaned text content of the web page
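A minimal streaming load sketch; streaming avoids downloading the multi-gigabyte archive up front:
```python
from datasets import load_dataset

ds = load_dataset("erhwenkuo/c4-chinese-zhtw", split="train", streaming=True)
sample = next(iter(ds))
print(sample["url"], sample["text"][:100])
```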
## Data Cleaning
See the [c4-dataset-script](https://github.com/jedcheng/c4-dataset-script) project on GitHub for the download and cleaning logic and code.
The main steps are:
1. Download the WET crawl archive index file
2. Run the download and Chinese screening script on Spark
3. Filter out non-sentence lines and toxic documents
4. Remove duplicated text
5. Remove documents that are overly self-repeating (Repetition Removal as in DeepMind's MassiveText)
## License Information
Please follow the Common Crawl terms of use:
- https://commoncrawl.org/terms-of-use
| [
-0.41147276759147644,
-0.5121151804924011,
0.09404270350933075,
0.27915239334106445,
-0.7872491478919983,
-0.09880221635103226,
-0.16294321417808533,
-0.47856417298316956,
0.4864330291748047,
0.3638033866882324,
-0.6481851935386658,
-1.0720820426940918,
-0.23359991610050201,
0.322590887546... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
smangrul/hf-stack-peft | smangrul | 2023-10-12T06:43:30Z | 13 | 0 | null | [
"region:us"
] | 2023-10-12T06:43:30Z | 2023-10-12T06:43:27.000Z | 2023-10-12T06:43:27 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1280407
num_examples: 158
download_size: 424682
dataset_size: 1280407
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf-stack-peft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6796300411224365,
-0.530893862247467,
0.22902455925941467,
0.4080129861831665,
-0.06990627944469452,
0.2522086501121521,
0.4471530020236969,
-0.05667950212955475,
0.7678413391113281,
0.7014616131782532,
-0.729296088218689,
-0.6035412549972534,
-0.46952491998672485,
-0.22453573346138,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gaodrew/sassy-aztec-qa-13k | gaodrew | 2023-10-19T22:12:00Z | 13 | 3 | null | [
"license:mit",
"region:us"
] | 2023-10-19T22:12:00Z | 2023-10-19T22:10:26.000Z | 2023-10-19T22:10:26 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
antareepdey/Medical_chat_Llama-chat-template | antareepdey | 2023-10-20T04:53:27Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-20T04:53:27Z | 2023-10-20T04:48:25.000Z | 2023-10-20T04:48:25 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 384344651
num_examples: 379455
download_size: 218544482
dataset_size: 384344651
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fm_templates | coastalcph | 2023-10-24T07:03:22Z | 13 | 0 | null | [
"region:us"
] | 2023-10-24T07:03:22Z | 2023-10-20T07:38:49.000Z | 2023-10-20T07:38:49 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MU-NLPC/Calc-asdiv_a | MU-NLPC | 2023-10-30T15:56:07Z | 13 | 0 | null | [
"arxiv:2305.15017",
"region:us"
] | 2023-10-30T15:56:07Z | 2023-10-20T18:34:13.000Z | 2023-10-20T18:34:13 | ---
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: result_unit
dtype: string
- name: grade
dtype: int64
- name: source_question
dtype: string
splits:
- name: test
num_bytes: 415636
num_examples: 1218
download_size: 152949
dataset_size: 415636
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: result_unit
dtype: string
- name: grade
dtype: int64
- name: source_question
dtype: string
splits:
- name: test
num_bytes: 415664
num_examples: 1218
download_size: 152949
dataset_size: 415664
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-asdiv_a
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from the arithmetic subset of ASDiv ([original repo](https://github.com/chaochun/nlu-asdiv-dataset)).
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple HTML-like language that can be easily
parsed (e.g. by BeautifulSoup; see the sketch after the tag list below). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
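A minimal BeautifulSoup parsing sketch, as suggested above; the chain string here is a hypothetical example in this format, not taken from the dataset:
```python
from bs4 import BeautifulSoup

chain = "<gadget>2 * (3 + 4)</gadget><output>14</output> Final: <result>14</result>"

soup = BeautifulSoup(chain, "html.parser")
calls = [g.get_text() for g in soup.find_all("gadget")]    # expressions for the calculator
outputs = [o.get_text() for o in soup.find_all("output")]  # tool outputs fed back to the model
answer = soup.find("result").get_text()                    # final answer
print(calls, outputs, answer)
```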
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Data splits
The dataset does not define train/validation splits; we consider the whole dataset a testing benchmark.
## Attributes:
- **id**: id of the example
- **question**: problem description in English
- **chain**: series of simple operations (derived from **expression**) that lead to the solution
- **result**: the solution for x as a number or fraction (string)
- **result_float**: same as **result** but converted to a float
- **result_unit**: the units of the result
- **grade**: an estimate of the school grade in which the problem would be practiced
- **source_question**: the source from which the example originates
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original ASDiv dataset and repo**](https://github.com/chaochun/nlu-asdiv-dataset)
- [**original ASDiv paper**](https://aclanthology.org/2020.acl-main.92)
## Licence
CC BY-NC 4.0, consistent with the original source dataset linked above.
## Cite
If you use this dataset in research, please cite the original [ASDiv paper](https://aclanthology.org/2020.acl-main.92), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```
| [
-0.32707667350769043,
-0.5232989192008972,
0.10012735426425934,
0.04855765774846077,
0.05456874892115593,
0.023739399388432503,
-0.14751943945884705,
-0.1989051103591919,
0.28824087977409363,
0.37396878004074097,
-0.6608824729919434,
-0.5013028383255005,
-0.44051381945610046,
0.15874691307... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
georgeyw/dsir-pile-1m | georgeyw | 2023-10-22T21:20:58Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-22T21:20:58Z | 2023-10-22T21:18:04.000Z | 2023-10-22T21:18:04 | ---
license: mit
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai2lumos/lumos_unified_plan_iterative | ai2lumos | 2023-10-23T22:27:04Z | 13 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"question-answering",
"web-agent",
"planning",
"region:us"
] | 2023-10-23T22:27:04Z | 2023-10-23T05:38:03.000Z | 2023-10-23T05:38:03 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
- question-answering
language:
- en
tags:
- language-agent
- maths
- reasoning
- question-answering
- web-agent
- planning
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
- **Lumos** consists of planning, grounding, and execution modules built on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
- **Lumos** is trained with ~40K high-quality annotations derived with GPT-4 from ground-truth reasoning steps in existing benchmarks.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
- 🚀 **Lumos** performs better than open agent baseline formulations, including **chain-of-thought** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_unified_plan_iterative` contains the data for training the **planning** module on **maths**, **complex QA**, and **web agent** tasks in the **Lumos-Iterative (Lumos-I)** formulation.
The sources of the training annotations are shown below:
| Task | Number |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
|StrategyQA|1777|
|Musique|17632|
|Mind2Web|1009|
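A minimal load sketch for inspecting these annotations; the default config and a `train` split are assumptions, since this card does not document them:
```python
from datasets import load_dataset

ds = load_dataset("ai2lumos/lumos_unified_plan_iterative")
print(ds)              # available splits and schema
print(ds["train"][0])  # one planning-module training example
```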
## Models Trained with the Data
`lumos_unified_plan_iterative` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_unified_plan_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_plan_iterative) |
## Citation
If you find this work relevant to your research, please feel free to cite it!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | [
-0.07330036163330078,
-0.48826611042022705,
0.37751832604408264,
0.2970030605792999,
-0.19219569861888885,
0.048855412751436234,
-0.4556001126766205,
-0.5901021361351013,
0.39217492938041687,
0.40010392665863037,
-0.5815843343734741,
-0.5330526232719421,
-0.3183465301990509,
-0.07643245905... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MattBastar/Medicine_Details | MattBastar | 2023-10-25T00:04:39Z | 13 | 0 | null | [
"region:us"
] | 2023-10-25T00:04:39Z | 2023-10-24T22:48:33.000Z | 2023-10-24T22:48:33 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ka4on/ultrasound_test | Ka4on | 2023-10-25T20:16:13Z | 13 | 0 | null | [
"region:us"
] | 2023-10-25T20:16:13Z | 2023-10-25T20:08:59.000Z | 2023-10-25T20:08:59 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mickume/dnd_drow | mickume | 2023-10-25T20:48:10Z | 13 | 0 | null | [
"region:us"
] | 2023-10-25T20:48:10Z | 2023-10-25T20:48:03.000Z | 2023-10-25T20:48:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 41076570
num_examples: 179983
download_size: 25478602
dataset_size: 41076570
---
# Dataset Card for "dnd_drow"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43594902753829956,
-0.24024105072021484,
0.16185414791107178,
0.21061961352825165,
-0.30197757482528687,
0.15716145932674408,
0.5038725733757019,
-0.1646057516336441,
0.9388692378997803,
0.6700014472007751,
-1.001219630241394,
-0.8628594279289246,
-0.5681122541427612,
-0.181376025080680... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pnadel/jfk_senior_thesis_data | pnadel | 2023-10-26T12:43:24Z | 13 | 0 | null | [
"region:us"
] | 2023-10-26T12:43:24Z | 2023-10-26T12:42:28.000Z | 2023-10-26T12:42:28 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: collection
dtype: string
- name: packageId
dtype: string
- name: granuleId
dtype: string
- name: title
dtype: string
- name: detailsLink
dtype: string
- name: pdfLink
dtype: string
- name: htmlLink
dtype: string
- name: xmlLink
dtype: string
- name: otherLink1
dtype: string
- name: otherLink2
dtype: float64
- name: teaser
dtype: string
- name: historical
dtype: float64
- name: publishdate
dtype: string
- name: president
dtype: string
- name: full_text
dtype: string
- name: url_to_use
dtype: string
- name: path_to_text
dtype: string
splits:
- name: train
num_bytes: 3121664312
num_examples: 4908
download_size: 1609034276
dataset_size: 3121664312
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "jfk_senior_thesis_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42632123827934265,
-0.31638655066490173,
0.1818634569644928,
-0.003729218617081642,
-0.11163198202848434,
0.27759212255477905,
0.2824481129646301,
0.1528477668762207,
0.7368508577346802,
0.7402248382568359,
-0.6948756575584412,
-1.2392306327819824,
-0.41590821743011475,
-0.3384146690368... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VuongQuoc/english_learn | VuongQuoc | 2023-10-27T09:09:46Z | 13 | 0 | null | [
"region:us"
] | 2023-10-27T09:09:46Z | 2023-10-27T09:05:55.000Z | 2023-10-27T09:05:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 4602747761.0
num_examples: 77456
download_size: 4600511540
dataset_size: 4602747761.0
---
# Dataset Card for "english_learn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42089855670928955,
-0.2707720994949341,
0.0669938325881958,
0.1842477172613144,
-0.004053190350532532,
0.1270386427640915,
-0.10544412583112717,
-0.28069159388542175,
0.7553356885910034,
0.2509883940219879,
-0.7165249586105347,
-0.8925647735595703,
-0.6720889806747437,
-0.09105230867862... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrzaid/bootcamp_qna | mrzaid | 2023-10-30T12:41:15Z | 13 | 0 | null | [
"license:mit",
"region:us"
] | 2023-10-30T12:41:15Z | 2023-10-29T01:28:19.000Z | 2023-10-29T01:28:19 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null |