id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
adalbertojunior/ICD_dataset | 2023-09-13T21:59:45.000Z | [
"region:us"
] | adalbertojunior | null | null | 0 | 18 | 2023-09-13T02:49:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label
sequence: string
splits:
- name: train
num_bytes: 418410601
num_examples: 39354
- name: test
num_bytes: 53529100
num_examples: 5000
- name: validation
num_bytes: 52947510
num_examples: 5000
download_size: 301971173
dataset_size: 524887211
---
# Dataset Card for "ICD_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 701 | [
[
-0.03460693359375,
-0.0007634162902832031,
0.0223846435546875,
0.016204833984375,
-0.026702880859375,
0.0047454833984375,
0.0294342041015625,
-0.01082611083984375,
0.05877685546875,
0.025177001953125,
-0.04656982421875,
-0.06195068359375,
-0.04193115234375,
... |
FreedomIntelligence/ACVA-Arabic-Cultural-Value-Alignment | 2023-09-21T12:39:18.000Z | [
"size_categories:1K<n<10K",
"language:ar",
"license:apache-2.0",
"region:us"
] | FreedomIntelligence | null | null | 0 | 18 | 2023-09-15T10:18:04 | ---
language:
- ar
viewer: true
license: apache-2.0
size_categories:
- 1K<n<10K
---
# About ArabicCulture
The ArabicCulture dataset was generated with GPT-3.5 and contains 8,000+ true/false questions.
The questions cover 58 different areas.
Among the answers, "True" accounts for 59.62% and "False" for 40.38%.
# data-all
This split contains all 8,000+ examples; we took 5 examples from each area as few-shot data.
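The per-area few-shot selection described above can be sketched as follows (the `area` field name is an assumption; the card does not specify the schema):

```python
from collections import defaultdict

def pick_few_shot(rows, per_area=5):
    """Take the first `per_area` examples from each area as few-shot data.

    `rows` is a list of dicts; the "area" key is an assumed field name.
    Returns (few_shot, remainder), preserving input order.
    """
    seen = defaultdict(int)
    few_shot, remainder = [], []
    for row in rows:
        if seen[row["area"]] < per_area:
            few_shot.append(row)
            seen[row["area"]] += 1
        else:
            remainder.append(row)
    return few_shot, remainder
```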
# data-select
We asked two native Arabic speakers to judge 4,000 of the examples and kept only those that both judged to be good, yielding 2.4k examples covering 9 areas.
We divided these into test and validation sets as above. | 662 | [
[
-0.057403564453125,
-0.041259765625,
0.0228424072265625,
-0.003387451171875,
-0.00279998779296875,
-0.0015392303466796875,
0.01447296142578125,
-0.034942626953125,
0.002033233642578125,
0.0312042236328125,
-0.0302581787109375,
-0.0643310546875,
-0.03854370117187... |
enrdur/monero_xmr_question_answer | 2023-10-11T20:57:37.000Z | [
"language:en",
"license:wtfpl",
"finance",
"region:us"
] | enrdur | null | null | 0 | 18 | 2023-09-16T13:35:54 | ---
language:
- en
license: wtfpl
pretty_name: XMR questions & answers
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: asnwer
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 118480
num_examples: 236
download_size: 73482
dataset_size: 118480
tags:
- finance
---
# Monero (XMR) Q&A Dataset
## Overview
The Monero (XMR) Q&A Dataset is a meticulously curated compilation of questions and answers focused on the Monero cryptocurrency. This dataset is designed to serve as a resource for machine learning practitioners, data scientists, cryptocurrency enthusiasts, and researchers aiming to build models that can understand, interact with, or analyze the Monero ecosystem.
## Features
- **Comprehensive Coverage**: The dataset covers a wide array of topics, ranging from basic concepts like "What is Monero?" to more complex subjects such as ring signatures, stealth addresses, and privacy mechanisms.
- **Quality Assurance**: Each entry has undergone thorough validation to ensure factual accuracy and relevance to the evolving landscape of Monero.
- **Machine Learning Ready**: Formatted to be readily used in a variety of machine learning models, including NLP algorithms for chatbots.
## Applications
- **Chatbots**: Enhance the conversational capabilities of bots focused on cryptocurrency topics.
## Format
The dataset is structured as question/answer pairs; you will need to process it further if your model expects a particular format. | 1,631 | [
[
-0.0450439453125,
-0.060394287109375,
0.01155853271484375,
-0.0223236083984375,
-0.0178680419921875,
0.037994384765625,
0.0095367431640625,
-0.035552978515625,
0.019622802734375,
0.05535888671875,
-0.0628662109375,
-0.032257080078125,
-0.038848876953125,
0.0... |
yashmangal28/langchain-docs | 2023-09-18T13:18:32.000Z | [
"region:us"
] | yashmangal28 | null | null | 0 | 18 | 2023-09-18T13:15:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
A-Roucher/amazon_product_reviews_datafiniti | 2023-09-26T14:12:40.000Z | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:feature-extraction",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | A-Roucher | null | null | 0 | 18 | 2023-09-18T14:16:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: brand
dtype:
class_label:
names:
'0': Amazon
'1': AmazonBasics
'2': Amazonbasics
- name: primaryCategories
dtype: string
- name: reviews.numHelpful
dtype: float64
- name: reviews.rating
dtype: int64
- name: reviews.text
dtype: string
splits:
- name: train
num_bytes: 1107781.5
num_examples: 6000
- name: test
num_bytes: 369260.5
num_examples: 2000
download_size: 704792
dataset_size: 1477042
task_categories:
- text-classification
- question-answering
- feature-extraction
language:
- en
pretty_name: Amazon Product Reviews by Datafiniti
size_categories:
- 1K<n<10K
---
# Dataset Card for "amazon_product_reviews_datafiniti"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,019 | [
[
-0.04876708984375,
-0.0228424072265625,
0.00803375244140625,
0.039031982421875,
-0.0261077880859375,
0.002529144287109375,
0.027069091796875,
-0.0124664306640625,
0.04498291015625,
0.034332275390625,
-0.0634765625,
-0.062347412109375,
-0.023101806640625,
-0.... |
Hyder12/LLM_Bootcamp_Fine_tune_QnA | 2023-09-25T21:53:39.000Z | [
"region:us"
] | Hyder12 | null | null | 0 | 18 | 2023-09-20T04:00:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
diegomiranda/small-dataset-img-test | 2023-10-04T20:32:06.000Z | [
"arxiv:2308.16900",
"region:us"
] | diegomiranda | null | @article{bender2023learning,
title={Learning to Taste: A Multimodal Wine Dataset},
author={Bender, Thoranna and S{\o}rensen, Simon M{\o}e and Kashani, Alireza and Hjorleifsson, K Eldjarn and Hyldig, Grethe and Hauberg, S{\o}ren and Belongie, Serge and Warburg, Frederik},
journal={arXiv preprint arXiv:2308.16900},
year={2023}
} | 0 | 18 | 2023-09-30T16:55:49 | # Dataset Card for WineSensed
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [WineSensed Dataset](https://thoranna.github.io/learning_to_taste/)
- **Repository:**
- **Paper:** [Paper](https://arxiv.org/pdf/2308.16900.pdf)
### Dataset Summary
The dataset encompasses 897k images of wine labels and 824k reviews of wines
curated from the Vivino platform. It has over 350k unique vintages, annotated
with year, region, rating, alcohol percentage, price, and grape composition.
We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment
with 256 participants who were asked to rank wines based on their similarity in flavor,
resulting in more than 5k pairwise flavor distances.
### Languages
English
## Dataset Structure
### Data Fields
The dataset contains the file metadata.zip, which consists of the following files:
- participants.csv: information connecting participants to annotations in the experiment
- images_reviews_attributes.csv: reviews, links to images, and wine attributes
- napping.csv: the coordinates of each wine on the napping paper, alongside information connecting each coordinate pair to the wine being annotated and to the participant who annotated it

The chunk_<chunk num>.zip folders contain the images of the wines in the dataset in .jpg format.
#### napping.csv contains the following fields:
- session_round_name: session number during the event_name, at most three sessions per event (maps to experiment_round in participants.csv)
- event_name: name of the data collection event (maps to the same attribute in participants.csv)
- experiment_no: which number the napping paper was in the list of papers returned for this session_round_name (maps to experiment_no in participants.csv)
- experiment_id: id the wine being annotated was given in the experiment
- coor1: x-axis coordinate on the napping paper
- coor2: y-axis coordinate on the napping paper
- color: color of the sticker used
#### participants.csv contains the following fields:
- session_round_name: session number during the event_name, at most three sessions per event (maps to experiment_round in napping.csv)
- event_name: name of data-collection event (maps to event_name in napping.csv)
- experiment_no: which number the napping paper was in the list of papers returned for this session_round_name (maps to experiment_no in napping.csv)
- round_id: round number (from 1-3)
- participant_id: id the participant was given in the experiment
#### images_reviews_attributes.csv contains the following fields:
- vintage_id: vintage id of the wine
- image: image link (each .jpg in chunk_<chunk num>.zip can be mapped to a corresponding image link in this column by removing the /p prefix from the link).
- review: user review of the wine
- experiment_id: id the wine got during data collection (each experiment_id can be mapped to the same column in napping.csv)
- year: year the wine was produced
- winery_id: id of the winery that produced the wine
- wine: name of the wine
- alcohol: the wine's alcohol percentage
- country: the country where the wine was produced
- region: the region where the wine was produced
- price: price of the wine in USD (collected 05/2023)
- rating: average rating of the wine (collected 05/2023)
- grape: the wine's grape composition, represented as a comma-separated list ordered in descending sequence of the percentage contribution of each grape variety to the overall blend.
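A minimal sketch of working with these fields, joining napping annotations to wine attributes on `experiment_id` and recovering the local .jpg name by stripping the `/p` prefix from the image link (row dicts keyed by the field names listed above; the exact link shape is an assumption):

```python
def image_to_filename(image_link: str) -> str:
    """Recover the .jpg name inside chunk_<n>.zip by removing the /p prefix
    from the image link, as the field list above describes."""
    return image_link.removeprefix("/p")

def join_on_experiment_id(napping_rows, attribute_rows):
    """Attach wine attributes to each napping annotation via experiment_id."""
    by_id = {row["experiment_id"]: row for row in attribute_rows}
    return [
        {**nap, **by_id[nap["experiment_id"]]}
        for nap in napping_rows
        if nap["experiment_id"] in by_id
    ]
```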
## Dataset Creation
### All Images Dataset
1) Unzip all the chunk_*.zip files
2) Copy the script create_all_images_dataset.sh to the output_images/ directory
3) Execute chmod +x create_all_images_dataset.sh
4) Execute ./create_all_images_dataset.sh
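Steps 1 and 2 above can be sketched in Python as follows (a sketch only, assuming the chunk_*.zip archives and the provided script sit in `src_dir`; steps 3 and 4, chmod and running the script, stay in the shell):

```python
import pathlib
import shutil
import zipfile

def extract_chunks(src_dir: str, out_dir: str = "output_images"):
    """Unzip every chunk_*.zip from src_dir into out_dir and copy
    create_all_images_dataset.sh alongside the images, if present.
    Returns the list of extracted file names."""
    src = pathlib.Path(src_dir)
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    extracted = []
    for archive in sorted(src.glob("chunk_*.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out)
            extracted.extend(zf.namelist())
    script = src / "create_all_images_dataset.sh"
    if script.exists():
        shutil.copy(script, out)
    return extracted
```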
## Additional Information
### Licensing Information
LICENSE AGREEMENT
=================
- WineSensed by Thoranna Bender, Simon Sørensen, Alireza Kashani, Kristjan Eldjarn, Grethe Hyldig,
Søren Hauberg, Serge Belongie, Frederik Warburg is licensed under a CC BY-NC-ND 4.0 Licence
### Citation Information
```
@article{bender2023learning,
title={Learning to Taste: A Multimodal Wine Dataset},
author={Bender, Thoranna and S{\o}rensen, Simon M{\o}e and Kashani, Alireza and Hjorleifsson, K Eldjarn and Hyldig, Grethe and Hauberg, S{\o}ren and Belongie, Serge and Warburg, Frederik},
journal={arXiv preprint arXiv:2308.16900},
year={2023}
}
```
| 4,799 | [
[
-0.014007568359375,
-0.040069580078125,
0.0250091552734375,
0.016693115234375,
-0.0233154296875,
-0.0234222412109375,
-0.0211944580078125,
-0.0211334228515625,
0.0297088623046875,
0.030609130859375,
-0.036224365234375,
-0.070068359375,
-0.0290374755859375,
-... |
MLNavigator/russian-retrieval | 2023-10-30T13:22:15.000Z | [
"license:mit",
"region:us"
] | MLNavigator | null | null | 1 | 18 | 2023-10-02T14:58:05 | ---
license: mit
---
Based on Sberquad
- Answers converted into a human-readable form.
- Context augmented with pieces of text from Wikipedia related to the source text by topic and keywords.
- This dataset can be used for training retrieval LLMs, or adapter modules that give an LLM the ability to retrieve target information from a collection of thematically related texts.
- The dataset has a variant with SOURCE data for generating answers that specify the source document of the right answer; see the file retrieval_dataset_src.jsonl.
The dataset consists of 45,278 Russian-language examples of the format:
{
'text': 'text with correct answer',
'q': 'question text',
'a': 'correct answer text',
'context': 'text of 4-10 text chunks, one with the right answer and the others relevant to the text and question by topic and keywords'
}
The length of one example (context + question + answer) is less than 7,000 symbols, which should be under 2,048 tokens with the rugpt tokenizer.
The file retrieval_dataset_src.jsonl additionally includes SOURCE data for every text chunk in the context; the SOURCE of the right answer is also set in the answer.
This variant of the dataset is useful if you need to extract an answer together with the source of the right answer.
{
'text': 'text with correct answer',
'q': 'question text',
'a': 'correct answer text with SOURCE data of the text',
'context': 'text of 4-10 text chunks, one with the right answer and the others relevant to the text and question by topic and keywords.
Each text chunk has its own SOURCE data'
}
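A record of either variant can be read and sanity-checked against the stated length budget with a sketch like this (the 7,000-symbol bound comes from the card; the field names are the ones shown above):

```python
import json

REQUIRED_KEYS = {"text", "q", "a", "context"}
MAX_SYMBOLS = 7000  # card: context + question + answer stays under 7,000 symbols

def iter_valid_records(path: str):
    """Yield records from a .jsonl file, checking schema and length budget."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            assert REQUIRED_KEYS <= rec.keys(), f"missing keys: {REQUIRED_KEYS - rec.keys()}"
            assert len(rec["context"]) + len(rec["q"]) + len(rec["a"]) <= MAX_SYMBOLS
            yield rec
```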
All SOURCE data are synthetically generated and not real. | 1,579 | [
[
-0.0092620849609375,
-0.058685302734375,
0.01445770263671875,
-0.01397705078125,
-0.0237579345703125,
-0.0014057159423828125,
-0.0081939697265625,
-0.0118865966796875,
0.0110931396484375,
0.043243408203125,
-0.052459716796875,
-0.03912353515625,
-0.0112152099609... |
shossain/govreport-qa-5-8192 | 2023-10-03T21:26:08.000Z | [
"region:us"
] | shossain | null | null | 0 | 18 | 2023-10-02T23:53:30 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 410925
num_examples: 5
download_size: 110024
dataset_size: 410925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-8192"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.02886962890625,
-0.004993438720703125,
0.0306396484375,
0.01418304443359375,
-0.0168914794921875,
-0.007480621337890625,
0.04046630859375,
-0.0094451904296875,
0.0537109375,
0.0416259765625,
-0.03961181640625,
-0.0533447265625,
-0.03289794921875,
-0.00551... |
sbarham/megawika-test | 2023-10-03T17:22:49.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:af",
"language:ar",
"language:az",
"language:bn",
"language:cs",
"language:de",
"language:en",
"language:e... | sbarham | MegaWika is a multi- and crosslingual text dataset containing 30 million
Wikipedia passages with their scraped and cleaned web citations. The
passages span 50 Wikipedias in 50 languages, and the articles in which
the passages were originally embedded are included for convenience. Where
a Wikipedia passage is in a non-English language, an automated English
translation is provided. Furthermore, nearly 130 million English
question/answer pairs were extracted from the passages, and FrameNet events
occurring in the passages are detected using the LOME FrameNet parser. | @article{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
author={Barham, Samuel and Weller, Orion and
Yuan, Michelle and Murray, Kenton and
Yarmohammadi, Mahsa and Jiang, Zhengping and
Vashishtha, Siddharth and Martin, Alexander and
Liu, Anqi and White, Aaron Steven and
Boyd-Graber, Jordan and Van Durme, Benjamin
},
journal={INSERT ARXIV PREPRINT ID HERE},
year={2023}
} | 0 | 18 | 2023-10-03T15:24:34 | ---
license: cc-by-sa-4.0
task_categories:
- summarization
- question-answering
- text-generation
- text2text-generation
language:
- af
- ar
- az
- bn
- cs
- de
- en
- es
- et
- fa
- fi
- fr
- ga
- gl
- gu
- he
- hi
- hr
- id
- it
- ja
- ka
- kk
- km
- ko
- lt
- lv
- mk
- ml
- mn
- mr
- my
- ne
- nl
- pl
- ps
- pt
- ro
- ru
- si
- sl
- sv
- ta
- th
- tr
- uk
- ur
- vi
- xh
- zh
pretty_name: MegaWika
size_categories:
- 10M<n<100M
---
# Dataset Card for MegaWika
## Dataset Description
- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [Coming soon]
- **Leaderboard:** [Coming soon]
- **Point of Contact:** [Samuel Barham](samuel.barham@jhuapl.edu)
### Dataset Summary
MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span
50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a
non-English language, an automated English translation is provided. Furthermore, nearly 130 million English question/answer pairs were extracted from the
passages, and FrameNet events occurring in the passages are detected using the [LOME](https://aclanthology.org/2021.eacl-demos.19.pdf) FrameNet parser.
<!---
To get a feel for the dataset -- its structure, content, strengths and weaknesses -- you may visit the [dataset viewer](https://huggingface.co/spaces/hltcoe/megawika)
we have set up as a HuggingFace Space. It allows the curious visitor to explore a small set of examples spread across a number of the dataset's constituent languages.
-->
### Dataset Creation
The pipeline through which MegaWika was created is complex, and is described in more detail in the paper (linked above),
but the following diagram illustrates the basic approach.

### Supported Tasks and Leaderboards
MegaWika is meant to support research across a variety of tasks, including report generation, summarization, information retrieval, question answering, etc.
### Languages
MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by their 2-character ISO language code:
- `af`: Afrikaans
- `ar`: Arabic
- `az`: Azeri (Azerbaijani)
- `bn`: Bengali
- `cs`: Czech
- `de`: German (Deutsch)
- `en`: English
- `es`: Spanish (Español)
- `et`: Estonian
- `fa`: Farsi (Persian)
- `fi`: Finnish
- `fr`: French
- `ga`: Irish (Gaelic)
- `gl`: Galician
- `gu`: Gujarati
- `he`: Hebrew
- `hi`: Hindi
- `hr`: Croatian
- `id`: Indonesian
- `it`: Italian
- `ja`: Japanese
- `ka`: Georgian (Kartvelian/Kartlian)
- `kk`: Kazakh
- `km`: Khmer
- `ko`: Korean
- `lt`: Lithuanian
- `lv`: Latvian
- `mk`: Macedonian (Makedonski)
- `ml`: Malayalam
- `mn`: Mongolian
- `mr`: Marathi
- `my`: Burmese (Myanmar language)
- `ne`: Nepali
- `nl`: Dutch (Nederlands)
- `pl`: Polish
- `ps`: Pashto
- `pt`: Portuguese
- `ro`: Romanian
- `ru`: Russian
- `si`: Sinhalese (Sri Lankan language)
- `sl`: Slovenian
- `sv`: Swedish (Svenska)
- `ta`: Tamil
- `th`: Thai
- `tr`: Turkish
- `uk`: Ukrainian
- `ur`: Urdu
- `vi`: Vietnamese
- `xh`: Xhosa
- `zh`: Chinese (Zhōng wén)
## Dataset Structure
The dataset is divided by language, and the data for each of the 50 languages is further chunked into discrete JSON lines files.
Each line of these files -- we'll call such a line an **instance** -- contains the data extracted from a single Wikipedia article.
### Data Instances
Each instance contains the text of the seed Wikipedia article, along with a list of **entries**. Each entry basically consists of
an extracted Wikipedia passage, the URL and scraped text of the web source it cites, a list of questions/answer pairs extracted from the passage,
and a framenet parse of the passage. Where the passage is from a non-English Wikipedia, a machine translation into English is also provided.
### Data Fields
The detailed structure of an instance is as follows:
```
{
"article_title": <string : title of original Wikipedia article>
"article_text": <string : text of Wikipedia article>
"entries": [
# Wiki Passage
"id": <string : passage ID>
"passage": {
"text": <string : text of passage in English (possibly via MT)>
"parse": <list of dict : FrameNet parse of English passage text>
"en_tokens": <dict : tokenization of passage in English>
"lang_tokens": <dict : tokenization of original non-English passage>
"en_lang_token_map": <dict : alignment mapping between English and original language token indices>
}
# MT
"original": <string : original language passage>
"original_sents": <list of string : sentencized original language passage>
"translation": <string : machine translation of passage>
"translation_sents": <list of string : sentencized machine translation of passage>
"translation_probs": <list of float : log prob of machine translation by sentence, where available>
"repetitious_translation": <string \in ("true", "false") : automated judgment on whether machine translation is pathologically repetitious>
"source_lang": <string : language ID, 2-character ISO code>
# Source
"source_url": <string : URL of the cited web source>
"source_text": <string : content extracted from the scrape of the source URL>
# Question/Answer Pairs
"qa_pairs": [
...
{
"question": <string : generated question>
"passage_id": <string : passage ID>
"en_answer": <string : English answer>
"lang_answer": <string : aligned original language answer>
"frames": [
...
{
"frame": <string : frame triggered by the question>
"argument": <string : detected frame arguments>
}
...
]
# NB: answer matches can be empty, in the case no matching span exists
"en_matches_in_source": <list of int : start and end index of the English language-answer token(s) in the source document>
"en_match_in_passage": <list of int : start and end index of the English language-answer token(s) in the English language translation of the passage>
"lang_matches_in_source": <list of int : start and end index of the original language-answer token(s) in the source document>
"lang_match_in_passage": <list of int : start and end index of the original language-answer token(s) in the original language passage>
"passage": <list of string : sentencized view of the passage>
"en_answer_tokens": <list of string>
"match_disambiguated_question": <string : disambiguated version of question obtained by matching pronouns with article title (noisy but often helpful)>
}
...
]
]
}
```
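For orientation, here is a minimal sketch of walking this structure, collecting (question, English answer) pairs from one JSON-lines instance (only fields shown in the schema above are accessed):

```python
import json

def qa_pairs_from_instance(line: str):
    """Extract (question, en_answer) pairs from one MegaWika JSONL instance."""
    instance = json.loads(line)
    pairs = []
    for entry in instance.get("entries", []):
        for qa in entry.get("qa_pairs", []):
            pairs.append((qa["question"], qa["en_answer"]))
    return pairs
```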
English-language instances differ not in structure but in content:
1. Fields in the block labeled "MT" above are naturally null (that is, they are set to falsy values in Python -- specifically `None`)
2. Since the Wiki passage only exists in English, and has no corresponding non-English "original language" version, answer spans also necessarily have only an English-language version (and no non-English "original-language" version). Therefore, fields in the `qa_pairs` block beginning with `lang_` are set to null/falsy values in Python (in this case, empty lists).
### Data Splits
MegaWika is currently split only by language, as each task will imply its own approach to filtering, sampling, downselecting, and splitting into train/test splits.
<!---
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
-->
## Licensing and Takedown
MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).
We do not own any of the scraped text, nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.
We release this dataset and all its contents under CC-BY-SA-4.0.
### Notice and Takedown Policy:
*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact the authors.
*Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
## Additional Information
### Dataset Curators
Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:samuel.barham@jhuapl.edu), [Orion Weller](mailto:oweller2@jhu.edu),
and [Ben van Durme](mailto:vandurme@jhu.edu) with questions.
### Licensing Information
Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@misc{barham2023megawika,
title={MegaWika: Millions of reports and their sources across 50 diverse languages},
author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
year={2023},
eprint={2307.07049},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
### Contributions
[More Information Needed]
--> | 10,430 | [
[
-0.044464111328125,
-0.058929443359375,
0.0177001953125,
0.01059722900390625,
-0.014892578125,
-0.01629638671875,
-0.0262908935546875,
-0.033416748046875,
0.04608154296875,
0.033233642578125,
-0.04974365234375,
-0.036651611328125,
-0.0360107421875,
0.0478820... |
NickyNicky/finance-financialmodelingprep-stock-news-sentiments-rss-feed | 2023-10-05T00:40:43.000Z | [
"region:us"
] | NickyNicky | null | null | 1 | 18 | 2023-10-05T00:40:32 | ---
dataset_info:
features:
- name: symbol
dtype: string
- name: publishedDate
dtype: string
- name: title
dtype: string
- name: image
dtype: string
- name: site
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: sentiment
dtype: string
- name: sentimentScore
dtype: float64
splits:
- name: train
num_bytes: 107184621
num_examples: 142000
download_size: 49547931
dataset_size: 107184621
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "finance-financialmodelingprep-stock-news-sentiments-rss-feed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 787 | [
[
-0.024688720703125,
0.002285003662109375,
-0.0027446746826171875,
0.045013427734375,
-0.0192718505859375,
0.007061004638671875,
0.006481170654296875,
0.01641845703125,
0.057525634765625,
0.01397705078125,
-0.068603515625,
-0.06829833984375,
-0.0419921875,
-0... |
Intuit-GenSRF/haternet | 2023-10-05T01:37:51.000Z | [
"region:us"
] | Intuit-GenSRF | null | null | 0 | 18 | 2023-10-05T01:37:49 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
splits:
- name: train
num_bytes: 788430
num_examples: 6000
download_size: 513972
dataset_size: 788430
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "haternet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 471 | [
[
-0.04150390625,
-0.0117340087890625,
0.00498199462890625,
0.0164642333984375,
-0.01132965087890625,
-0.0050201416015625,
0.0227813720703125,
-0.01434326171875,
0.06451416015625,
0.021942138671875,
-0.06134033203125,
-0.0535888671875,
-0.049591064453125,
-0.0... |
RtwC/people | 2023-10-07T02:58:03.000Z | [
"region:us"
] | RtwC | null | null | 0 | 18 | 2023-10-07T02:34:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PE
'2': I-PE
'3': B-OR
'4': I-OR
'5': B-LO
'6': I-LO
splits:
- name: train
num_bytes: 14972408
num_examples: 20865
- name: validation
num_bytes: 1676725
num_examples: 2319
- name: test
num_bytes: 3346959
num_examples: 4637
download_size: 2731946
dataset_size: 19996092
---
# Dataset Card for "people"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 893 | [
[
-0.047210693359375,
-0.00972747802734375,
0.018310546875,
0.01525115966796875,
-0.00982666015625,
0.00923919677734375,
0.019866943359375,
-0.0230560302734375,
0.06634521484375,
0.035308837890625,
-0.05926513671875,
-0.048614501953125,
-0.036712646484375,
-0.... |
johannes-garstenauer/embeddings_from_distilbert_masking_heaps_and_eval_part0 | 2023-10-09T07:16:29.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 18 | 2023-10-09T07:13:33 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: int64
- name: pred
dtype: int64
- name: cls_layer_6
sequence: float32
- name: cls_layer_5
sequence: float32
- name: cls_layer_4
sequence: float32
splits:
- name: train
num_bytes: 1282993344
num_examples: 134592
download_size: 1493342036
dataset_size: 1282993344
---
# Dataset Card for "embeddings_from_distilbert_masking_heaps_and_eval_part0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.03314208984375,
-0.03424072265625,
0.01971435546875,
0.029815673828125,
-0.0165252685546875,
0.01537322998046875,
0.034088134765625,
0.0096282958984375,
0.062347412109375,
0.0213165283203125,
-0.039825439453125,
-0.06072998046875,
-0.056243896484375,
-0.0... |
baebee/chatgpt-custom_inst | 2023-10-09T19:16:48.000Z | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"language:tl",
"license:mit",
"region:us"
] | baebee | null | null | 0 | 18 | 2023-10-09T07:31:02 | ---
license: mit
task_categories:
- summarization
- question-answering
- conversational
language:
- en
- tl
size_categories:
- n<1K
---
# Languages: English, Tagalog
## Collection Process:
- Dialogs generated by instructing ChatGPT to respond concisely
- Responses edited by Nuph researchers for naturalness
- Bilingual exchanges added for diversity
## Intended Use:
- Train conversational agents
- Research in straightforward dialog
# Limitations:
- Small scale (300 rows)
- Biased toward English
- Limited to text conversations
# Ethics and Privacy:
- No personal or offensive content
- ChatGPT instructed to avoid unethical responses
- Data anonymized - no personally identifiable information | 703 | [
[
-0.018157958984375,
-0.06121826171875,
-0.00801849365234375,
0.046539306640625,
-0.03155517578125,
0.016510009765625,
-0.0199432373046875,
-0.031463623046875,
0.025665283203125,
0.051910400390625,
-0.044677734375,
-0.005855560302734375,
-0.030029296875,
0.02... |
miulab/tmlu | 2023-10-31T02:46:15.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:zh",
"region:us"
] | miulab | null | null | 0 | 18 | 2023-10-09T11:15:13 | ---
task_categories:
- question-answering
- text-classification
language:
- zh
pretty_name: TMLU
size_categories:
- 1K<n<10K
configs:
- config_name: AST_chinese
data_files:
- split: test
path: "AST_chinese_test.jsonl"
- split: dev
path: "AST_chinese_dev.jsonl"
- config_name: AST_biology
data_files:
- split: test
path: "AST_biology_test.jsonl"
- split: dev
path: "AST_biology_dev.jsonl"
- config_name: AST_chemistry
data_files:
- split: test
path: "AST_chemistry_test.jsonl"
- split: dev
path: "AST_chemistry_dev.jsonl"
- config_name: AST_physics
data_files:
- split: test
path: "AST_physics_test.jsonl"
- split: dev
path: "AST_physics_dev.jsonl"
- config_name: AST_civics
data_files:
- split: test
path: "AST_civics_test.jsonl"
- split: dev
path: "AST_civics_dev.jsonl"
- config_name: AST_geography
data_files:
- split: test
path: "AST_geography_test.jsonl"
- split: dev
    path: "AST_geography_dev.jsonl"
- config_name: AST_history
data_files:
- split: test
path: "AST_history_test.jsonl"
- split: dev
path: "AST_history_dev.jsonl"
- config_name: GSAT_chinese
data_files:
- split: test
path: "GSAT_chinese_test.jsonl"
- split: dev
    path: "GSAT_chinese_dev.jsonl"
- config_name: GSAT_chemistry
data_files:
- split: test
path: "GSAT_chemistry_test.jsonl"
- split: dev
path: "GSAT_chemistry_dev.jsonl"
- config_name: GSAT_biology
data_files:
- split: test
path: "GSAT_biology_test.jsonl"
- split: dev
path: "GSAT_biology_dev.jsonl"
- config_name: GSAT_physics
data_files:
- split: test
path: "GSAT_physicis_test.jsonl"
- split: dev
path: "GSAT_physicis_dev.jsonl"
- config_name: GSAT_earth_science
data_files:
- split: test
path: "GSAT_earth_science_test.jsonl"
- split: dev
path: "GSAT_earth_science_dev.jsonl"
- config_name: GSAT_geography
data_files:
- split: test
path: "GSAT_geography_test.jsonl"
- split: dev
path: "GSAT_geography_dev.jsonl"
- config_name: GSAT_history
data_files:
- split: test
path: "GSAT_history_test.jsonl"
- split: dev
path: "GSAT_history_dev.jsonl"
- config_name: GSAT_civics
data_files:
- split: test
path: "GSAT_civics_test.jsonl"
- split: dev
path: "GSAT_civics_dev.jsonl"
- config_name: CAP_biology
data_files:
- split: test
path: "CAP_biology_test.jsonl"
- split: dev
path: "CAP_biology_dev.jsonl"
- config_name: CAP_physics
data_files:
- split: test
path: "CAP_physics_test.jsonl"
- split: dev
path: "CAP_physics_dev.jsonl"
- config_name: CAP_chemistry
data_files:
- split: test
path: "CAP_chemistry_test.jsonl"
- split: dev
path: "CAP_chemistry_dev.jsonl"
- config_name: CAP_earth_science
data_files:
- split: test
path: "CAP_earth_science_test.jsonl"
- split: dev
path: "CAP_earth_science_dev.jsonl"
- config_name: Driving Rule
data_files:
- split: test
path: "Driving Rule_test.jsonl"
- split: dev
path: "driving rule_dev.jsonl"
- config_name: Basic Traditional Chinese Medicine
data_files:
- split: test
path: "Basic Traditional Chinese Medicine_test.jsonl"
- split: dev
path: "Basic Traditional Chinese Medicine_dev.jsonl"
- config_name: Clinical Traditional Chinese Medicine
data_files:
- split: test
path: "Clinical Traditional Chinese Medicine_test.jsonl"
- split: dev
path: "Clinical Traditional Chinese Medicine_dev.jsonl"
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
- AST: Advanced Subjects Test (分科測驗; known as the 指考 before ROC year 110)
- GSAT: General Scholastic Ability Test (學科能力測驗)
- CAP: Comprehensive Assessment Program for Junior High School Students (國中教育會考)
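Each config above pairs a `dev` split (few-shot examples) with a `test` split. Below is a minimal sketch of combining such splits into a few-shot evaluation prompt; the `question`/`options`/`answer` field names are assumptions for illustration, not the dataset's documented schema:

```python
def format_mc_question(question, options, answer=None):
    """Render one multiple-choice item as a prompt block.

    `options` maps choice letters (e.g. "A".."D") to choice strings.
    `answer` is included for few-shot (dev) examples and left blank
    for the test item the model must complete.
    """
    lines = [question]
    for letter in sorted(options):
        lines.append(f"({letter}) {options[letter]}")
    lines.append(f"答案:{answer}" if answer is not None else "答案:")
    return "\n".join(lines)


def build_few_shot_prompt(dev_examples, test_example):
    """Concatenate answered dev examples followed by the unanswered test item."""
    blocks = [
        format_mc_question(ex["question"], ex["options"], ex["answer"])
        for ex in dev_examples
    ]
    blocks.append(
        format_mc_question(test_example["question"], test_example["options"])
    )
    return "\n\n".join(blocks)
```

The model's next token after the trailing `答案:` marker can then be scored against the gold choice letter.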
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 8,178 | [
[
-0.037506103515625,
-0.04150390625,
0.0088958740234375,
0.0205841064453125,
-0.031890869140625,
-0.01110076904296875,
-0.0047760009765625,
-0.0487060546875,
0.043212890625,
0.05938720703125,
-0.05548095703125,
-0.0689697265625,
-0.042388916015625,
0.00909423... |
04RR/tiny-instruct | 2023-10-15T16:55:38.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"region:us"
] | 04RR | null | null | 13 | 18 | 2023-10-09T14:06:24 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1M<n<10M
pretty_name: tiny-instruct
---
# tiny-instruct-v1
This dataset is collated from multiple other open-source datasets (de-duplicated). It has a total of ~6M rows, each with an instruction and response (single-turn conversation).
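A minimal sketch of the two steps implied above, de-duplication and single-turn prompt formatting; the whitespace/case normalization key and the instruction/response template are illustrative assumptions, since the exact procedure used to build this dataset is not documented:

```python
import hashlib


def dedup(rows):
    """Keep only the first row for each normalized instruction text."""
    seen, kept = set(), []
    for row in rows:
        # Collapse whitespace and case so near-identical prompts collide.
        norm = " ".join(row["instruction"].lower().split())
        key = hashlib.sha256(norm.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept


def to_training_text(instruction, response):
    """Join one single-turn pair into a single training string."""
    return (
        "### Instruction:\n"
        f"{instruction.strip()}\n\n"
        "### Response:\n"
        f"{response.strip()}"
    )
```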
#### Code Datasets:
1. [CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K)
2. [CodeExercise-Python-27k](https://huggingface.co/datasets/codefuse-ai/CodeExercise-Python-27k)
3. [Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1)
4. [tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes)
5. [Evol-instruction-66k](https://huggingface.co/datasets/codefuse-ai/Evol-instruction-66k)
6. [sciphi-python-textbook](https://huggingface.co/datasets/emrgnt-cmplxty/sciphi-python-textbook)
7. [programming_books_llama](https://huggingface.co/datasets/open-phi/programming_books_llama)
8. [WizardLM_evol_instruct_70k](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_70k)
#### Math Datasets:
1. [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
2. [arxiv-math-instruct-50k](https://huggingface.co/datasets/ArtifactAI/arxiv-math-instruct-50k)
3. [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
#### General Datasets:
1. [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
2. [claude_evol_instruct_210k](https://huggingface.co/datasets/Norquinal/claude_evol_instruct_210k) | 1,537 | [
[
-0.03857421875,
-0.03759765625,
0.0032482147216796875,
0.016693115234375,
0.00998687744140625,
-0.018951416015625,
-0.007503509521484375,
-0.0156097412109375,
0.015350341796875,
0.032562255859375,
-0.032989501953125,
-0.039398193359375,
-0.01116943359375,
0.... |
carnival13/sur_test_rt5_few_8 | 2023-10-11T04:34:04.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 18 | 2023-10-11T04:33:48 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 656906195
num_examples: 900000
download_size: 161337040
dataset_size: 656906195
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sur_test_rt5_few_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 505 | [
[
-0.06317138671875,
-0.01494598388671875,
0.00921630859375,
0.0147705078125,
-0.019561767578125,
-0.01361083984375,
0.031951904296875,
-0.003406524658203125,
0.0400390625,
0.04144287109375,
-0.053558349609375,
-0.051971435546875,
-0.032684326171875,
0.0163574... |
librarian-bots/dataset_cards_with_metadata | 2023-11-02T01:31:59.000Z | [
"task_categories:text-retrieval",
"size_categories:10K<n<100K",
"ethics",
"documentation",
"region:us"
] | librarian-bots | null | null | 6 | 18 | 2023-10-11T09:15:10 | ---
size_categories:
- 10K<n<100K
task_categories:
- text-retrieval
dataset_info:
features:
- name: id
dtype: string
- name: lastModified
dtype: string
- name: tags
sequence: string
- name: author
dtype: string
- name: description
dtype: string
- name: citation
dtype: string
- name: likes
dtype: int64
- name: downloads
dtype: int64
- name: created
dtype: timestamp[us]
- name: card
dtype: string
splits:
- name: train
num_bytes: 185387594
num_examples: 73991
download_size: 48554779
dataset_size: 185387594
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- ethics
- documentation
---
# Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of [dataset cards](https://huggingface.co/docs/hub/datasets-cards) for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub.
This dataset is updated on a daily basis and includes publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new [discussion](https://huggingface.co/datasets/librarian-bots/model_cards_with_metadata/discussions/new).
## Dataset Details
### Dataset Description
- **Curated by:** Daniel van Strien
- **Language(s) (NLP):** Dataset cards on the Hugging Face Hub are predominantly in English but may include other languages.
## Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
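As an illustration of the format/content analysis use case, here is a minimal sketch that measures how many cards still contain the Hub template placeholder. A toy in-memory sample stands in for the dataset's `card` column, which in practice would come from `load_dataset("librarian-bots/dataset_cards_with_metadata")`:

```python
PLACEHOLDER = "[more information needed]"


def placeholder_rate(cards):
    """Fraction of card bodies that still contain the template placeholder.

    `cards` is an iterable of card strings (the `card` column); matching is
    case-insensitive because cards vary between "Needed" and "needed".
    """
    cards = list(cards)
    if not cards:
        return 0.0
    hits = sum(1 for card in cards if PLACEHOLDER in card.lower())
    return hits / len(cards)
```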
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.
### Source Data
The source data is `README.md` files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The data is downloaded using a CRON job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
#### Annotation process
N/A
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
N/A
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Dataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards.
Some dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
## Dataset Card Authors
[@davanstrien](https://huggingface.co/davanstrien)
## Dataset Card Contact
[@davanstrien](https://huggingface.co/davanstrien) | 5,837 | [
[
-0.037841796875,
-0.049346923828125,
0.0025501251220703125,
0.03192138671875,
-0.0249786376953125,
-0.01493072509765625,
-0.01227569580078125,
-0.056396484375,
0.045928955078125,
0.04046630859375,
-0.06817626953125,
-0.057891845703125,
-0.04302978515625,
0.0... |
iara-project/raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2 | 2023-10-11T21:24:50.000Z | [
"region:us"
] | iara-project | null | null | 0 | 18 | 2023-10-11T21:22:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: news_id
dtype: string
- name: embeddings
sequence: float64
- name: sentence
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1672222472
num_examples: 176114
- name: test
num_bytes: 1670470539
num_examples: 176114
download_size: 2474408751
dataset_size: 3342693011
---
# Dataset Card for "raw_dataset_with_embeddings_bert-base-portuguese-cased-nli-assin-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 729 | [
[
-0.029541015625,
-0.021087646484375,
0.0042572021484375,
0.031768798828125,
-0.03912353515625,
0.00431060791015625,
0.00704193115234375,
-0.01171875,
0.06146240234375,
0.0242462158203125,
-0.04107666015625,
-0.061553955078125,
-0.041778564453125,
-0.01745605... |
carnival13/rbrt_test | 2023-10-14T06:49:04.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 18 | 2023-10-14T06:48:17 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1270137685
num_examples: 900000
download_size: 282453475
dataset_size: 1270137685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 498 | [
[
-0.047821044921875,
-0.0396728515625,
-0.0024890899658203125,
0.01348876953125,
-0.0153656005859375,
0.005462646484375,
0.0110321044921875,
-0.0137786865234375,
0.042938232421875,
0.0274810791015625,
-0.048614501953125,
-0.04046630859375,
-0.035003662109375,
... |
sunjun/medmcqa | 2023-10-14T13:42:50.000Z | [
"region:us"
] | sunjun | null | null | 0 | 18 | 2023-10-14T13:42:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: cop
dtype:
class_label:
names:
'0': a
'1': b
'2': c
'3': d
- name: choice_type
dtype: string
- name: exp
dtype: string
- name: subject_name
dtype: string
- name: topic_name
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: answer_idx
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 136988451
num_examples: 182822
- name: test
num_bytes: 2350095
num_examples: 4183
download_size: 90978864
dataset_size: 139338546
---
# Dataset Card for "medmcqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,095 | [
[
-0.041900634765625,
-0.0081939697265625,
0.0306243896484375,
-0.0035762786865234375,
-0.01385498046875,
0.01268768310546875,
0.03875732421875,
0.0007390975952148438,
0.05426025390625,
0.042236328125,
-0.068115234375,
-0.05853271484375,
-0.038665771484375,
-0... |
Nbardy/light_illusion | 2023-10-15T02:45:04.000Z | [
"region:us"
] | Nbardy | null | null | 0 | 18 | 2023-10-15T01:55:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 39671500920.473
num_examples: 14859
download_size: 43276966952
dataset_size: 39671500920.473
---
# Dataset Card for "light_illusion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 496 | [
[
-0.0298309326171875,
-0.0277252197265625,
0.016876220703125,
0.01910400390625,
-0.017547607421875,
-0.00649261474609375,
0.0208740234375,
-0.03289794921875,
0.06585693359375,
0.033935546875,
-0.048614501953125,
-0.036407470703125,
-0.02642822265625,
-0.02053... |
carnival13/rbrt_eval_sur_lrg3 | 2023-10-15T02:32:55.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 18 | 2023-10-15T02:32:50 | ---
dataset_info:
features:
- name: domain_label
dtype: int64
- name: pass_label
dtype: int64
- name: input
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 13795820
num_examples: 6970
download_size: 3884690
dataset_size: 13795820
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_eval_sur_lrg3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 611 | [
[
-0.042144775390625,
-0.045135498046875,
0.0152740478515625,
0.0150146484375,
-0.01300048828125,
0.0217742919921875,
0.0133056640625,
-0.0100860595703125,
0.035430908203125,
0.04022216796875,
-0.03802490234375,
-0.047332763671875,
-0.0289764404296875,
-0.0039... |
zelros/pj-ce | 2023-10-22T18:11:24.000Z | [
"region:us"
] | zelros | null | null | 0 | 18 | 2023-10-15T14:32:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
se2p/code-readability-merged | 2023-10-18T14:33:15.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:unknown",
"readability",
"code",
"source code",
"code readability",
"Java",
"region:us"
] | se2p | null | null | 0 | 18 | 2023-10-17T17:12:08 | ---
language:
- en
license: unknown
size_categories:
- n<1K
task_categories:
- text-classification
pretty_name: Java Code Readability Merged Dataset
tags:
- readability
- code
- source code
- code readability
- Java
features:
- name: code_snippet
dtype: string
- name: score
dtype: float
dataset_info:
features:
- name: code_snippet
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 354539
num_examples: 421
download_size: 139793
dataset_size: 354539
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Java Code Readability Merged Dataset
This dataset contains **421 Java code snippets** along with a **readability score**, aggregated from several scientific papers [1, 2, 3].
You can download the dataset using Hugging Face:
```python
from datasets import load_dataset
ds = load_dataset("se2p/code-readability-merged")
```
The snippets are **not** split into train, test, and validation sets. Thus, the whole dataset is in the **train** split:
```python
ds = ds['train']
ds_as_list = ds.to_list() # Convert the dataset to whatever format suits you best
```
The dataset is structured as follows:
```json
{
"code_snippet": ..., # Java source code snippet
"score": ... # Readability score
}
```
The main goal of this repository is to train code **readability classifiers for Java source code**.
The dataset is a combination and normalization of three datasets:
1. **Buse**, R. P., & Weimer, W. R. (2009). Learning a metric for code readability. IEEE Transactions on software engineering, 36(4), 546-558.
2. **Dorn**, J. (2012). A General Software Readability Model.
3. **Scalabrino**, S., Linares‐Vásquez, M., Oliveto, R., & Poshyvanyk, D. (2018). A comprehensive model for code readability. Journal of Software: Evolution and Process, 30(6), e1958.
The raw datasets can be downloaded [here](https://dibt.unimol.it/report/readability/).
## Dataset Details
### Dataset Description
- **Curated by:** Buse Raymond PL, Dorn Jonathan, Scalabrino Simone
- **Shared by:** Krodinger Lukas
- **Language(s) (NLP):** Java
- **License:** Unknown
## Uses
The dataset can be used for training Java code readability classifiers.
## Dataset Structure
Each entry of the dataset consists of a **code_snippet** and a **score**.
The code_snippet (string) is the code snippet that was rated in a study by multiple participants.
Participants answered on a five-point Likert scale, with 1 being very unreadable and 5 being very readable.
The score (float) is the averaged rating score of all participants between 1.0 (very unreadable) and 5.0 (very readable).
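Since the intended use is classification while `score` is continuous, a common first step is to threshold the averaged score into binary labels. A minimal sketch follows; the 3.0 midpoint cutoff is an illustrative assumption, not part of the dataset:

```python
def binarize(entries, threshold=3.0):
    """Map averaged readability scores to binary labels.

    Each entry is a dict with `code_snippet` and `score` keys, as in this
    dataset; snippets scoring at or above `threshold` are labeled 1
    (readable), the rest 0 (not readable).
    """
    return [
        {"code_snippet": e["code_snippet"], "label": int(e["score"] >= threshold)}
        for e in entries
    ]
```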
## Dataset Creation
### Curation Rationale
To advance code readability classification, the creation of datasets in this research field is of high importance.
As a first step, we provide a merged and normalized version of existing datasets on Hugging Face.
This makes the existing data easier to access and use.
### Source Data
The source of the data are the papers from Buse, Dorn and Scalabrino.
Buse conducted a survey with 120 computer science students (17 from first year courses, 63 from second year courses, 30 third or fourth year courses, 10 graduated) on 100 code snippets.
The code snippets were generated from five open source Java projects.
Dorn conducted a survey with 5000 participants (1800 with industry experience) on 360 code snippets from which 121 are Java code snippets.
The used snippets were drawn from ten open source projects in the SourceForge repository (of March 15, 2012).
Scalabrino conducted a survey with 9 computer science students on 200 new code snippets.
The snippets were selected from four open source Java projects: jUnit, Hibernate, jFreeChart and ArgoUML.
#### Data Collection and Processing
The dataset was preprocessed by **averaging the readability rating** for each code snippet.
The code snippets and ratings were then **merged** from the three sources.
Buse, Dorn, and Scalabrino each selected their code snippets based on different criteria.
They had a different number of participants for their surveys.
One could argue that a code snippet rated by more participants has a more accurate readability score and is therefore more valuable than one with fewer ratings.
However, for simplicity those differences are ignored.
Other than the selection (and generation) done by the original data source authors, no further processing is applied to the data.
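The averaging step described above can be sketched as follows; the `(snippet_id, rating)` pair format is an assumption for illustration, since the raw per-participant ratings are not part of this dataset:

```python
from collections import defaultdict


def average_ratings(ratings):
    """Collapse per-participant Likert ratings into one mean score per snippet.

    `ratings` is an iterable of (snippet_id, rating) pairs; the result maps
    each snippet id to the mean of all ratings it received.
    """
    per_snippet = defaultdict(list)
    for snippet_id, rating in ratings:
        per_snippet[snippet_id].append(rating)
    return {sid: sum(vals) / len(vals) for sid, vals in per_snippet.items()}
```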
#### Who are the source data producers?
The source data producers are the people who wrote the open-source Java projects used, as well as the study participants, who were mostly computer science students.
#### Personal and Sensitive Information
The ratings of the code snippets are anonymized and averaged. Thus, no personal or sensitive information is contained in this dataset.
## Bias, Risks, and Limitations
The size of the dataset is very **small**.
The ratings of code snippets were done mostly by **computer science students**, who do not represent the group of Java programmers in general.
### Recommendations
The dataset should be used to train **small** Java code readability classifiers.
## Citation
1. Buse, R. P., & Weimer, W. R. (2009). Learning a metric for code readability. IEEE Transactions on software engineering, 36(4), 546-558.
2. Dorn, J. (2012). A General Software Readability Model.
3. Scalabrino, S., Linares‐Vásquez, M., Oliveto, R., & Poshyvanyk, D. (2018). A comprehensive model for code readability. Journal of Software: Evolution and Process, 30(6), e1958.
```bibtex
@article{buse2009learning,
title={Learning a metric for code readability},
author={Buse, Raymond PL and Weimer, Westley R},
journal={IEEE Transactions on software engineering},
volume={36},
number={4},
pages={546--558},
year={2009},
publisher={IEEE}
}
@inproceedings{dorn2012general,
title={A General Software Readability Model},
author={Jonathan Dorn},
year={2012},
url={https://api.semanticscholar.org/CorpusID:14098740}
}
@article{scalabrino2018comprehensive,
title={A comprehensive model for code readability},
author={Scalabrino, Simone and Linares-V{\'a}squez, Mario and Oliveto, Rocco and Poshyvanyk, Denys},
journal={Journal of Software: Evolution and Process},
volume={30},
number={6},
pages={e1958},
year={2018},
publisher={Wiley Online Library}
}
```
## Dataset Card Authors
Lukas Krodinger, [Chair of Software Engineering II](https://www.fim.uni-passau.de/en/chair-for-software-engineering-ii), [University of Passau](https://www.uni-passau.de/en/).
## Dataset Card Contact
Feel free to contact me via [E-Mail](mailto:krodin03@ads.uni-passau.de) if you have any questions or remarks. | 6,778 | [
[
-0.0032329559326171875,
-0.02410888671875,
0.0204010009765625,
0.00336456298828125,
-0.01434326171875,
-0.004302978515625,
-0.0244598388671875,
-0.048370361328125,
0.002521514892578125,
0.031982421875,
-0.01348114013671875,
-0.05902099609375,
-0.043609619140625,... |
argilla/mistral_vs_llama2 | 2023-10-18T14:02:13.000Z | [
"region:us"
] | argilla | null | null | 0 | 18 | 2023-10-18T14:02:11 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': activate_my_card
'1': age_limit
'2': apple_pay_or_google_pay
'3': atm_support
'4': automatic_top_up
'5': balance_not_updated_after_bank_transfer
'6': balance_not_updated_after_cheque_or_cash_deposit
'7': beneficiary_not_allowed
'8': cancel_transfer
'9': card_about_to_expire
'10': card_acceptance
'11': card_arrival
'12': card_delivery_estimate
'13': card_linking
'14': card_not_working
'15': card_payment_fee_charged
'16': card_payment_not_recognised
'17': card_payment_wrong_exchange_rate
'18': card_swallowed
'19': cash_withdrawal_charge
'20': cash_withdrawal_not_recognised
'21': change_pin
'22': compromised_card
'23': contactless_not_working
'24': country_support
'25': declined_card_payment
'26': declined_cash_withdrawal
'27': declined_transfer
'28': direct_debit_payment_not_recognised
'29': disposable_card_limits
'30': edit_personal_details
'31': exchange_charge
'32': exchange_rate
'33': exchange_via_app
'34': extra_charge_on_statement
'35': failed_transfer
'36': fiat_currency_support
'37': get_disposable_virtual_card
'38': get_physical_card
'39': getting_spare_card
'40': getting_virtual_card
'41': lost_or_stolen_card
'42': lost_or_stolen_phone
'43': order_physical_card
'44': passcode_forgotten
'45': pending_card_payment
'46': pending_cash_withdrawal
'47': pending_top_up
'48': pending_transfer
'49': pin_blocked
'50': receiving_money
'51': Refund_not_showing_up
'52': request_refund
'53': reverted_card_payment?
'54': supported_cards_and_currencies
'55': terminate_account
'56': top_up_by_bank_transfer_charge
'57': top_up_by_card_charge
'58': top_up_by_cash_or_cheque
'59': top_up_failed
'60': top_up_limits
'61': top_up_reverted
'62': topping_up_by_card
'63': transaction_charged_twice
'64': transfer_fee_charged
'65': transfer_into_account
'66': transfer_not_received_by_recipient
'67': transfer_timing
'68': unable_to_verify_identity
'69': verify_my_identity
'70': verify_source_of_funds
'71': verify_top_up
'72': virtual_card_not_working
'73': visa_or_mastercard
'74': why_verify_identity
'75': wrong_amount_of_cash_received
'76': wrong_exchange_rate_for_cash_withdrawal
- name: response
dtype: string
- name: response_mistral
dtype: string
- name: responses
sequence: string
splits:
- name: train
num_bytes: 348336
num_examples: 100
download_size: 164598
dataset_size: 348336
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mistral_vs_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,496 | [
[
-0.0266876220703125,
-0.014190673828125,
0.0161895751953125,
0.0325927734375,
-0.02093505859375,
-0.00725555419921875,
0.029571533203125,
-0.019805908203125,
0.04388427734375,
0.0205078125,
-0.049560546875,
-0.044708251953125,
-0.0469970703125,
-0.0123977661... |
SalomonMetre13/nnd_fr_14k | 2023-10-27T09:10:40.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:nnd",
"license:mit",
"region:us"
] | SalomonMetre13 | null | null | 0 | 18 | 2023-10-22T08:12:12 | ---
license: mit
language:
- nnd
task_categories:
- translation
size_categories:
- 10K<n<100K
---
This <span style="color:teal;">parallel corpus</span> contains <span style="color:teal;">14,478</span> aligned <span style="color:teal;">Nande-French</span> sentence pairs, with a <span style="color:teal;">90:10</span> train/test split. It was mainly used to fine-tune the <span style="color:teal;">t5-base</span> pretrained model to develop <a href="https://huggingface.co/SalomonMetre13/nnd_fr_mt" style="color:green;">this translation model</a>.
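The 90:10 ratio can be turned into concrete split sizes — a minimal sketch; the exact rounding the authors used is an assumption:

```python
# Sketch: derive approximate train/test sizes from the stated
# 14,478 sentence pairs and 90:10 split ratio.
# NOTE: the exact rounding used by the authors is an assumption.
TOTAL_PAIRS = 14_478

train_size = int(TOTAL_PAIRS * 0.9)   # floor of 90%
test_size = TOTAL_PAIRS - train_size  # remainder goes to the test set

print(train_size, test_size)  # -> 13030 1448
```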
[
-0.034088134765625,
-0.0438232421875,
0.031463623046875,
0.057861328125,
-0.0297088623046875,
0.018829345703125,
-0.0208740234375,
-0.01690673828125,
0.0220947265625,
-0.0005826950073242188,
-0.0380859375,
-0.040740966796875,
-0.04876708984375,
0.02532958984... |
yashnbx/gita_supersite_dump | 2023-10-22T13:39:17.000Z | [
"size_categories:n<1K",
"region:us"
] | yashnbx | null | null | 0 | 18 | 2023-10-22T12:18:24 | ---
dataset_info:
features:
- name: shloka_id
dtype: string
- name: chapter
dtype: string
- name: sutra
dtype: string
- name: trans-htrskd
dtype: string
description: Hindi Translation By Swami Ramsukhdas
- name: trans-httyn
dtype: string
description: Hindi Translation By Swami Tejomayananda
- name: trans-hcchi
dtype: string
description: Hindi Commentary By Swami Chinmayananda
- name: trans-hcrskd
dtype: string
description: Hindi Commentary By Swami Ramsukhdas
- name: trans-scang
dtype: string
description: Sanskrit Commentary By Sri Abhinavgupta
- name: trans-scram
dtype: string
description: Sanskrit Commentary By Sri Ramanujacharya
- name: trans-scanand
dtype: string
description: Sanskrit Commentary By Sri Anandgiri
- name: trans-scval
dtype: string
description: Sanskrit Commentary By Sri Vallabhacharya
- name: trans-scms
dtype: string
description: Sanskrit Commentary By Sri Madhusudan Saraswati
- name: trans-scsri
dtype: string
description: Sanskrit Commentary By Sri Sridhara Swami
- name: trans-scvv
dtype: string
description: Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha
- name: trans-scpur
dtype: string
description: Sanskrit Commentary By Sri Purushottamji
- name: trans-scneel
dtype: string
description: Sanskrit Commentary By Sri Neelkanth
- name: trans-scdhan
dtype: string
description: Sanskrit Commentary By Sri Dhanpati
- name: trans-ecsiva
dtype: string
description: English Commentary By Swami Sivananda
- name: trans-etsiva
dtype: string
description: English Translation By Swami Sivananda
- name: trans-etpurohit
dtype: string
description: English Translation By Purohit Swami
- name: trans-etgb
dtype: string
description: English Translation By Swami Gambirananda
- name: trans-setgb
dtype: string
description: English Translation Of Sri Shankaracharya By Swami Gambirananda
- name: trans-etssa
dtype: string
description: English Translation By Dr. S. Sankaranarayan
- name: trans-etassa
dtype: string
description: English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. Sankaranarayan
- name: trans-etradi
dtype: string
description: English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda
- name: trans-etadi
dtype: string
description: English Translation By Swami Adidevananda
- name: trans-htshg
dtype: string
description: Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka
- name: trans-scsh
dtype: string
description: Sanskrit Commentary By Sri Shankaracharya
- name: trans-scjaya
dtype: string
description: Sanskrit Commentary By Sri Jayatirtha
- name: trans-scmad
dtype: string
description: Sanskrit Commentary By Sri Madhvacharya
- name: script-dv
dtype: string
description: Devanagari
- name: script-as
dtype: string
description: Assamese
- name: script-bn
dtype: string
description: Bengali
- name: script-gu
dtype: string
description: Gujarati
- name: script-pa
dtype: string
description: Gurmukhi
- name: script-kn
dtype: string
description: Kannada
- name: script-ml
dtype: string
description: Malayalam
- name: script-or
dtype: string
description: Odia
- name: script-ro
dtype: string
description: Roman
- name: script-ta
dtype: string
description: Tamil
- name: script-te
dtype: string
description: Telugu
splits:
- name: train
num_bytes: 31628579
num_examples: 701
download_size: 11660830
dataset_size: 31628579
size_categories:
- n<1K
---
# Dataset Card for "gita_supersite_dump"
Extracted from: [gitasupersite.iitk](https://www.gitasupersite.iitk.ac.in/)
To recreate, check out [this notebook](./dump.ipynb)
Translation column names:
- `htrskd` - Hindi Translation By Swami Ramsukhdas
- `httyn` - Hindi Translation By Swami Tejomayananda
- `htshg` - Hindi Translation Of Sri Shankaracharya's Sanskrit Commentary By Sri Harikrishnadas Goenka
- `scsh` - Sanskrit Commentary By Sri Shankaracharya
- `hcchi` - Hindi Commentary By Swami Chinmayananda
- `hcrskd` - Hindi Commentary By Swami Ramsukhdas
- `scang` - Sanskrit Commentary By Sri Abhinavgupta
- `scram` - Sanskrit Commentary By Sri Ramanujacharya
- `scanand` - Sanskrit Commentary By Sri Anandgiri
- `scjaya` - Sanskrit Commentary By Sri Jayatirtha
- `scmad` - Sanskrit Commentary By Sri Madhvacharya
- `scval` - Sanskrit Commentary By Sri Vallabhacharya
- `scms` - Sanskrit Commentary By Sri Madhusudan Saraswati
- `scsri` - Sanskrit Commentary By Sri Sridhara Swami
- `scvv` - Sanskrit Commentary By Sri Vedantadeshikacharya Venkatanatha
- `scpur` - Sanskrit Commentary By Sri Purushottamji
- `scneel` - Sanskrit Commentary By Sri Neelkanth
- `scdhan` - Sanskrit Commentary By Sri Dhanpati
- `ecsiva` - English Commentary By Swami Sivananda
- `etsiva` - English Translation By Swami Sivananda
- `etpurohit` - English Translation By Purohit Swami
- `etgb` - English Translation By Swami Gambirananda
- `setgb` - English Translation Of Sri Shankaracharya By Swami Gambirananda
- `etssa` - English Translation By Dr. S. Sankaranarayan
- `etassa` - English Translation of Abhinavgupta's Sanskrit Commentary By Dr. S. Sankaranarayan
- `etradi` - English Translation of Ramanujacharya's Sanskrit Commentary By Swami Adidevananda
- `etadi` - English Translation By Swami Adidevananda
Script column names:
- `dv` - "Devanagari"
- `as` - "Assamese"
- `bn` - "Bengali"
- `gu` - "Gujarati"
- `pa` - "Gurmukhi"
- `kn` - "Kannada"
- `ml` - "Malayalam"
- `or` - "Odia"
- `ro` - "Roman"
- `ta` - "Tamil"
- `te` - "Telugu" | 5,850 | [
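The column codes above appear to follow a prefix convention (`ht` = Hindi Translation, `sc` = Sanskrit Commentary, and so on). A minimal helper to classify a code by that convention — the prefix scheme is inferred from the list above, not documented by the source, and codes like `setgb` fall outside it:

```python
# Classify a translation/commentary column code by its prefix.
# The prefix scheme is inferred from the column list above and is
# an assumption, not documented by the source site.
PREFIXES = {
    "hc": "Hindi Commentary",
    "ht": "Hindi Translation",
    "sc": "Sanskrit Commentary",
    "ec": "English Commentary",
    "et": "English Translation",
}

def classify(code: str) -> str:
    """Return the category for a column code such as 'htrskd'."""
    for prefix, category in PREFIXES.items():
        if code.startswith(prefix):
            return category
    return "Unknown"

print(classify("htrskd"))  # -> Hindi Translation
print(classify("scang"))   # -> Sanskrit Commentary
```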
[
-0.02264404296875,
-0.0268707275390625,
-0.00743865966796875,
0.0142974853515625,
-0.045135498046875,
0.00835418701171875,
-0.01323699951171875,
-0.005886077880859375,
0.04730224609375,
0.01200103759765625,
-0.048553466796875,
-0.04168701171875,
-0.0451049804687... |
zelros/pj-da | 2023-10-22T20:11:41.000Z | [
"region:us"
] | zelros | null | null | 0 | 18 | 2023-10-22T17:50:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zhaospei/cmg-llama | 2023-10-23T08:46:11.000Z | [
"region:us"
] | zhaospei | null | null | 0 | 18 | 2023-10-22T18:21:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Rocinante/tulu_merge | 2023-10-23T05:18:40.000Z | [
"region:us"
] | Rocinante | null | null | 0 | 18 | 2023-10-23T05:16:37 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: data_source
dtype: string
- name: history
sequence:
sequence: string
splits:
- name: train
num_bytes: 306750727
num_examples: 203886
download_size: 174953486
dataset_size: 306750727
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tulu_merge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 617 | [
[
-0.048797607421875,
-0.0201873779296875,
-0.00734710693359375,
0.0099945068359375,
-0.0189361572265625,
0.01015472412109375,
0.022857666015625,
-0.0184478759765625,
0.04302978515625,
0.0217437744140625,
-0.040252685546875,
-0.033935546875,
-0.04754638671875,
... |
theblackcat102/llava-instruct-mix | 2023-10-23T10:14:27.000Z | [
"task_categories:visual-question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"multimodal",
"vision",
"region:us"
] | theblackcat102 | null | null | 0 | 18 | 2023-10-23T09:34:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 46019106088.205
num_examples: 272795
download_size: 20289135489
dataset_size: 46019106088.205
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal
- vision
size_categories:
- 100K<n<1M
license: cc-by-nc-4.0
---
# LLaVA Instruct Mix
Added the OCR and Chart QA datasets to this mix for more text-extraction questions.
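Since the `conversations` feature is typed as a string, it presumably needs decoding before use. A minimal sketch, assuming each cell holds a JSON list of LLaVA-style `{"from": ..., "value": ...}` turns — that schema is an assumption, not confirmed by this card:

```python
import json

# Sketch: decode one `conversations` cell, assuming it holds a JSON
# list of {"from": ..., "value": ...} turns (LLaVA-style schema --
# an assumption, not confirmed by this card).
sample = '[{"from": "human", "value": "What is in the image?"}, {"from": "gpt", "value": "A chart."}]'

turns = json.loads(sample)
for turn in turns:
    print(f'{turn["from"]}: {turn["value"]}')
```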
| 490 | [
[
-0.01800537109375,
-0.043243408203125,
0.043548583984375,
-0.01459503173828125,
-0.02490234375,
0.035736083984375,
0.0264892578125,
-0.05889892578125,
0.01824951171875,
0.07708740234375,
-0.0260772705078125,
-0.0178985595703125,
-0.03497314453125,
0.01838684... |
iahlt/alarab_articles | 2023-10-29T06:23:03.000Z | [
"region:us"
] | iahlt | null | null | 0 | 18 | 2023-10-24T09:43:23 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: meta_language
dtype: string
- name: authors
sequence: string
- name: domain
dtype: string
- name: description
dtype: string
- name: meta_description
dtype: string
- name: meta_keywords
sequence: string
- name: meta_encoding
dtype: string
- name: tags
sequence: string
- name: toc
dtype: 'null'
- name: site_name
dtype: string
- name: language
dtype: string
- name: canonical_link
dtype: string
- name: public_date
dtype: string
splits:
- name: train
num_bytes: 527162478
num_examples: 145743
download_size: 219666828
dataset_size: 527162478
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alarab_articles"
## Description
Scraped articles from [alarab](https://www.alarab.co.il/) (145,743 articles)
## Usage
```python
from datasets import load_dataset
ds = load_dataset("iahlt/alarab_articles")
```
## Sample
```python
{'url': 'https://www.alarab.co.il/Article/882359.html',
'title': 'الاحتفال باضاءة شجرة عيد الميلاد في مدينة عكا',
'text': 'جرى مساء الجمعة الاحتفال الكبير في مدينة عكا لإضاءة شجرة عيد الميلاد وذلك بمشاركة المئات من الاهالي. وتخلل الاحتفال مسيرة كشفية مميزة .',
'meta_language': 'ar',
'authors': ['Sulieman Nimer'],
'domain': 'www.alarab.co.il',
'description': 'جرى مساء الجمعة الاحتفال الكبير في مدينة عكا لإضاءة شجرة عيد الميلاد وذلك بمشاركة المئات من الاهالي. وتخلل الاحتفال مسيرة كشفية مميزة .',
'meta_description': 'جرى مساء الجمعة الاحتفال الكبير في مدينة عكا لإضاءة شجرة عيد الميلاد وذلك بمشاركة المئات من الاهالي. وتخلل الاحتفال مسيرة كشفية مميزة .',
'meta_keywords': ['اخبار اليوم، موقع العرب، اخبار العرب، موقع أخبار ، رياضة ، سياسة ، فن عالمي ، فن عربي ، اقتصاد ، موسيقى ، ترفيه ، ألعاب ، سيارات ، أغاني ، كليبات ، افلام عربية ، صور جميلات العرب ومشاهير العرب'],
'meta_encoding': 'utf-8',
'tags': ['حالة الطقس',
'اسعار العملات مقابل الشيكل',
'الطقس',
'حالة الطقس اليوم'],
'toc': None,
'site_name': 'alarab',
'language': 'arabic',
'canonical_link': 'https://www.alarab.co.il/Article/882359',
'public_date': '14/12/18 21:04'}
```
## Citation
If you use this dataset, please cite:
```
@InProceedings{iahlt2023alarab_articles,
author = "iahlt",
title = "Arabic Corpus: Alarab",
year = "2023",
publisher = "",
location = "",
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,680 | [
[
-0.04937744140625,
-0.0399169921875,
0.0071868896484375,
0.01525115966796875,
-0.0234832763671875,
-0.0009326934814453125,
0.008209228515625,
-0.0131072998046875,
0.042633056640625,
0.019622802734375,
-0.027923583984375,
-0.06927490234375,
-0.047210693359375,
... |
Mehaki/formal_casual | 2023-10-24T20:19:28.000Z | [
"task_categories:text-generation",
"language:en",
"region:us"
] | Mehaki | null | null | 0 | 18 | 2023-10-24T17:07:23 | ---
language:
- en
task_categories:
- text-generation
---
[
{
"source_sentence": "Title: The Impact of Technology on Modern Education\n\nThe integration of technology in education has transformed the way students learn. Digital resources, online learning platforms, and interactive tools enhance the educational experience, making learning more accessible and engaging.",
"target_sentence": "Title: Learning in the Digital Age: Embrace the Tech Revolution\n\nHey, digital explorers! Let's chat about the coolest thing in education since sliced bread: technology. We're talking gadgets, gizmos, and learning in your PJs. Get cozy, because the future of education is here, and it's awesome!"
},
{
"source_sentence": "Title: The Importance of Financial Planning for Retirement\n\nPlanning for retirement is a crucial financial milestone. Sound financial planning, including saving, investing, and managing expenses, ensures financial security and a comfortable retirement lifestyle.",
"target_sentence": "Title: Retirement Ready: Your Ticket to Financial Freedom\n\nHey there, future retirees! It's time to get cozy with your retirement plans. We've got tips, tricks, and a side of relaxation for your financial journey. Kick back and let's make that retirement dream a reality!"
},
{
"source_sentence": "Title: The Impact of Climate Change on Ecosystems\n\nClimate change poses a severe threat to global ecosystems. Rising temperatures, sea-level rise, and extreme weather events are disrupting delicate ecological balances. Conservation efforts are essential to protect biodiversity.",
"target_sentence": "Title: Saving Our Planet, One Step at a Time\n\nHey eco-warriors! Time to chat about our favorite green hero, Mother Earth. We've got the 411 on climate change, and we're bringing eco-friendly vibes to your doorstep. Let's make eco-saving a lifestyle, and have a blast while we're at it!"
},
{
"source_sentence": "Title: The Role of Ethics in Business Leadership\n\nEthical leadership is the cornerstone of successful and sustainable businesses. Upholding strong ethical values fosters trust, integrity, and long-term growth. Ethical leaders set the standard for responsible business practices.",
"target_sentence": "Title: Leading with Heart: Navigating the Business Ethics Playground\n\nHey future ethical leaders! We're here to demystify the art of ethical leadership. It's all about doing the right thing, keeping it real, and having a blast in the business world. Let's lead with heart and make a positive impact!"
},
{
"source_sentence": "Title: The Significance of Healthy Lifestyle Choices\n\nAdopting a healthy lifestyle through balanced nutrition and regular exercise is crucial for overall well-being. Making smart choices in diet and physical activity promotes physical fitness and reduces the risk of chronic diseases.",
"target_sentence": "Title: Fit and Fabulous: Your Guide to a Healthy Lifestyle\n\nHey health enthusiasts! We're on a mission to make healthy living a piece of cake (a healthy cake, of course). It's all about delicious eats, fun workouts, and feeling amazing. Join the health party, and let's rock that healthy vibe!"
},
{
"source_sentence": "Title: The Impact of Social Media on Modern Communication\n\nSocial media has revolutionized the way we connect and communicate with one another. It facilitates real-time interactions, information sharing, and global connectivity. The influence of social media on society and communication is profound.",
"target_sentence": "Title: Social Media Unleashed: Connecting in the Digital Era\n\nHey digital citizens! Social media is our playground, and we're here to have a blast. Let's chat about hashtags, selfies, and all things viral. It's a digital world, and we're loving every like, share, and tweet!"
},
{
"source_sentence": "Title: The Role of Cultural Diversity in Global Harmony\n\nCultural diversity enriches our global society by fostering understanding, tolerance, and unity. Embracing diverse cultures promotes peace and cooperation among nations, leading to a harmonious world.",
"target_sentence": "Title: Embracing the Rainbow of Cultures: Our Global Family\n\nHey global citizens! It's time for a cultural fiesta. We're all about embracing differences, sharing stories, and feasting on international flavors. Get ready for a cultural hug, because we're one big global family!"
},
{
"source_sentence": "Title: The Importance of Time Management in Professional Success\n\nEffective time management is a cornerstone of professional success. It maximizes productivity, minimizes stress, and ensures that goals are met efficiently. Mastering time management is a key skill for career advancement.",
"target_sentence": "Title: Time Ninja: Crushing It in Your Daily Quest\n\nHey time warriors! It's time to gear up and conquer your day like a pro. We've got time-saving tricks, epic to-do lists, and a dash of spontaneity to make your journey awesome. Time to be the ninja of your own time saga!"
},
{
"source_sentence": "Title: The Role of Innovation in Business Growth\n\nInnovation drives business growth by fostering creativity, problem-solving, and adaptation to changing markets. Embracing innovation allows companies to stay competitive and seize new opportunities for expansion.",
"target_sentence": "Title: Innovation Playground: Where Ideas Run Wild\n\nHey innovation explorers! Welcome to the playground of creative thinking. We're all about wild ideas, fearless experiments, and making innovation a ton of fun. Let's get our creative juices flowing and change the game!"
},
{
"source_sentence": "Title: The Significance of Volunteering in Community Building\n\nVolunteering plays a crucial role in building strong and resilient communities. It fosters social cohesion, empathy, and a sense of responsibility among citizens. Engaging in volunteer work contributes to the well-being of both individuals and communities.",
"target_sentence": "Title: Volunteer Vibes: Spreading Goodness One Act at a Time\n\nHey changemakers! It's time to roll up our sleeves and make a positive impact. We're all about community love, random acts of kindness, and making volunteering a breeze. Join the volunteer party, and let's spread those good vibes!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence on Future Industries\n\nArtificial Intelligence (AI) is reshaping industries by automating tasks, improving efficiency, and driving innovation. Its applications in healthcare, finance, and manufacturing are transforming the way businesses operate and compete.",
"target_sentence": "Title: AI Adventures: Embracing the Tech Marvels\n\nHey tech enthusiasts! It's time to dive headfirst into the AI wonderland. We've got chatbots, smart homes, and gadgets galore to explore. Let's make friends with the robots and have a blast in the world of artificial intelligence!"
},
{
"source_sentence": "Title: The Role of Empathy in Healthcare\n\nEmpathy is a fundamental aspect of patient care in healthcare settings. Healthcare professionals who demonstrate empathy build trust, improve patient outcomes, and provide holistic support to those in their care.",
"target_sentence": "Title: Caring with Heart: Navigating the Healthcare Journey\n\nHey health heroes! It's time to talk about the awesome power of empathy in healthcare. We're all about bedside chats, healing vibes, and making the patient experience as cozy as a warm blanket. Let's bring compassion back to healthcare!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity in Protecting Data\n\nCybersecurity is vital for safeguarding sensitive data from online threats. Robust cybersecurity measures, including encryption and intrusion detection, are essential to protect individuals and organizations from data breaches and cyberattacks.",
"target_sentence": "Title: Digital Defenders: A Fun Guide to Cybersecurity\n\nHey digital defenders! Let's don our virtual capes and protect the digital realm. We've got cybersecurity tips, password heroes, and cyber adventures to embark on. It's time to be the guardians of the digital galaxy!"
},
{
"source_sentence": "Title: The Role of Leadership in Team Dynamics\n\nEffective leadership is essential for fostering collaboration and productivity within teams. Leaders who inspire trust, set clear goals, and encourage open communication contribute to successful team dynamics and overall organizational success.",
"target_sentence": "Title: Leading with Style: Crafting Your Leadership Journey\n\nHey future leaders! It's time to don your leadership capes and embark on a stylish journey. We're all about teamwork, high-fives, and making leadership an epic adventure. Let's lead with flair and have a blast along the way!"
},
{
"source_sentence": "Title: The Significance of Environmental Conservation\n\nEnvironmental conservation is vital for preserving the planet's natural resources and biodiversity. Sustainable practices, habitat preservation, and reduced pollution are key components of successful conservation efforts.",
"target_sentence": "Title: Green Living: Making Every Day Earth Day\n\nHey eco-warriors! It's time to put on your green superhero cape and go on an eco-adventure. We're all about recycling, tree hugging, and making Mother Earth proud. Let's be eco-champions and have a blast while doing it!"
},
{
"source_sentence": "Title: The Role of Ethics in Scientific Research\n\nEthical conduct in scientific research is fundamental for maintaining the integrity of the scientific community. Adhering to ethical standards ensures transparency, credibility, and the responsible pursuit of knowledge.",
"target_sentence": "Title: Science Ethics 101: Navigating the Research Maze\n\nHey future scientists! Let's break down the world of ethical research. We're all about lab adventures, responsible discoveries, and making science a barrel of fun. Join the research party, and let's explore the ethical frontier!"
},
{
"source_sentence": "Title: The Importance of Financial Literacy\n\nFinancial literacy empowers individuals to make informed financial decisions. It includes understanding concepts such as budgeting, saving, investing, and debt management. Developing financial literacy is crucial for long-term financial stability.",
"target_sentence": "Title: Money Matters: The Fun Guide to Financial Freedom\n\nHey financial wizards! It's time to embark on a financial journey filled with money wisdom, budgeting hacks, and wealth-building quests. We're all about making finances fun and helping you take charge of your money story!"
},
{
"source_sentence": "Title: The Impact of Stress on Mental Health\n\nChronic stress can have detrimental effects on mental health, leading to anxiety, depression, and other disorders. Managing stress through relaxation techniques, exercise, and seeking support is crucial for maintaining mental well-being.",
"target_sentence": "Title: Stress-Free Living: Your Guide to Inner Zen\n\nHey stress warriors! Let's talk about keeping the calm in the chaos. We're all about stress-busting tips, relaxation rituals, and making zen a way of life. Join the stress-free party, and let's conquer life's challenges together!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is revolutionizing the healthcare industry by enhancing diagnosis, treatment, and patient care. AI-driven technologies, such as predictive analytics and medical imaging, are improving healthcare outcomes.",
"target_sentence": "Title: HealthTech Revolution: Embracing AI in Healthcare\n\nHey health innovators! Let's explore the world of AI in healthcare. We're all about smart apps, wearable wonders, and making healthcare a breeze. Join the tech health party, and let's keep you in tip-top shape!"
},
{
"source_sentence": "Title: The Significance of Gender Equality in the Workplace\n\nGender equality in the workplace is essential for promoting diversity and inclusion. Ensuring equal opportunities, pay, and treatment for all employees contributes to a fair and productive work environment.",
"target_sentence": "Title: Work It, Equal Style: Navigating the Workplace Jungle\n\nHey workplace warriors! It's time to talk about equal opportunities, high-fives, and making the office a fantastic place for everyone. We're all about diversity, inclusion, and having a blast in the world of work. Join the equality party!"
},
{
"source_sentence": "Title: The Impact of Renewable Energy on Sustainable Development\n\nThe adoption of renewable energy sources, such as solar and wind power, plays a pivotal role in achieving sustainable development goals. Renewable energy reduces greenhouse gas emissions, fosters energy independence, and promotes environmental conservation.",
"target_sentence": "Title: Go Green, Go Cool: Your Guide to Renewable Energy\n\nHey eco-champions! It's time to talk about clean energy and making the planet a cooler place. We're all about solar smiles, wind-powered high-fives, and embracing the renewable energy revolution. Join the green energy party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity in a Digital World\n\nIn today's digital era, cybersecurity is critical for safeguarding sensitive information from cyber threats. Protecting data, networks, and systems is essential to ensure privacy, security, and business continuity.",
"target_sentence": "Title: Digital Defenders Unite: A Fun Guide to Cybersecurity\n\nHey digital superheroes! It's time to don your virtual capes and embark on a cyber-adventure. We've got cybersecurity tips, hacker showdowns, and digital escapades to make online safety a blast. Join the digital defense party!"
},
{
"source_sentence": "Title: The Role of Emotional Intelligence in Leadership\n\nEmotional intelligence (EQ) is a crucial trait for effective leadership. Leaders with high EQ can navigate complex emotions, build strong relationships, and inspire teams to achieve their best results.",
"target_sentence": "Title: Leading with Heart: The EQ Guide to Leadership\n\nHey future leaders! Let's chat about emotional intelligence and its superpowers in leadership. We're all about heart-centered leadership, empathy adventures, and making emotional intelligence your leadership super-skill. Join the EQ leadership party!"
},
{
"source_sentence": "Title: The Significance of Cultural Diversity in Education\n\nCultural diversity enriches the educational experience by exposing students to different perspectives and ideas. In diverse learning environments, students develop a global mindset, tolerance, and the ability to collaborate effectively in a multicultural world.",
"target_sentence": "Title: Embrace the World: Your Guide to Cultural Education\n\nHey global learners! Let's talk about the exciting world of cultural diversity in education. We're all about global friendships, international feasts, and making learning a passport to fun. Join the cultural education party!"
},
{
"source_sentence": "Title: The Role of Critical Thinking in Problem-Solving\n\nCritical thinking is a vital skill for effective problem-solving. It involves analyzing information, evaluating evidence, and making informed decisions. Developing critical thinking abilities enhances one's ability to address complex challenges.",
"target_sentence": "Title: Think Smarter, Not Harder: Mastering Critical Thinking\n\nHey critical thinkers! It's time to unlock the secrets of sharp minds and problem-solving prowess. We're all about brainpower boosts, thinking games, and making critical thinking a breeze. Join the critical thinking party!"
},
{
"source_sentence": "Title: The Importance of Time Management for Students\n\nEffective time management is a key skill for student success. Balancing academic responsibilities, extracurricular activities, and personal life requires organization and prioritization. Time management helps students achieve academic goals while maintaining a healthy lifestyle.",
"target_sentence": "Title: Student Life Hacks: Mastering Time Like a Pro\n\nHey student superheroes! It's time to unlock the secrets of time management and rule your student universe. We've got study tips, time-saving tricks, and life hacks to make your student journey a blast. Join the time management party!"
},
{
"source_sentence": "Title: The Role of Teamwork in Project Success\n\nEffective teamwork is essential for achieving project success. When team members collaborate, share ideas, and communicate effectively, they increase productivity, creativity, and the likelihood of meeting project goals within deadlines.",
"target_sentence": "Title: Team Power: Navigating the Collaboration Highway\n\nHey team players! It's time to rev up your teamwork engines and hit the collaboration highway. We're all about high-fives, brainstorming bonanzas, and making teamwork an adventure. Join the collaboration party!"
},
{
"source_sentence": "Title: The Impact of Climate Change on Global Economies\n\nClimate change poses significant risks to global economies through increased costs, supply chain disruptions, and damage to infrastructure. Mitigating climate change and transitioning to sustainable practices are essential for economic resilience.",
"target_sentence": "Title: Greening the World: Your Guide to Climate Action\n\nHey climate champions! Let's dive into the world of climate change and how we can make a difference. We're all about eco-fun, planet-saving tips, and embracing the green revolution. Join the climate action party!"
},
{
"source_sentence": "Title: The Importance of Early Childhood Education\n\nEarly childhood education plays a crucial role in a child's cognitive, social, and emotional development. High-quality early education programs provide a strong foundation for lifelong learning and academic success.",
"target_sentence": "Title: Tiny Explorers: Nurturing Young Minds with Fun\n\nHey early educators! It's time to embrace the joys of teaching tiny tots. We're all about storytime adventures, finger-painting masterpieces, and making early education a delightful journey. Join the early education party!"
},
{
"source_sentence": "Title: The Significance of Effective Communication in Business\n\nEffective communication is a cornerstone of successful business operations. Clear and efficient communication fosters collaboration, minimizes misunderstandings, and enhances decision-making within organizations, ultimately leading to improved profitability and growth.",
"target_sentence": "Title: Business Talk: Your Guide to Rocking Communication\n\nHey future business moguls! It's time to dive into the world of effective communication in business. We're all about elevator pitches, networking high-fives, and making business talk a breeze. Join the business communication party!"
},
{
"source_sentence": "Title: The Role of Diversity and Inclusion in Workplace Innovation\n\nDiversity and inclusion in the workplace lead to increased innovation and creativity. When organizations embrace diverse perspectives, backgrounds, and experiences, they are better equipped to develop groundbreaking ideas, products, and solutions.",
"target_sentence": "Title: Inclusivity Matters: Sparking Innovation in the Workplace\n\nHey innovation enthusiasts! Let's talk about how diversity and inclusion fuel creativity. We're all about idea parties, collaborative sparks, and making workplace innovation a blast. Join the workplace innovation party!"
},
{
"source_sentence": "Title: The Impact of Technology on Education\n\nTechnology has transformed the education landscape by enabling online learning, personalized instruction, and global connectivity. Embracing technology in education enhances access to knowledge and equips learners with 21st-century skills.",
"target_sentence": "Title: Tech-Savvy Learning: Your Guide to Digital Education\n\nHey digital learners! Let's dive into the world of educational technology and online learning. We're all about virtual field trips, interactive lessons, and making education a tech-fueled adventure. Join the digital education party!"
},
{
"source_sentence": "Title: The Importance of Work-Life Balance for Employee Well-being\n\nMaintaining a healthy work-life balance is essential for employee well-being and productivity. Organizations that prioritize work-life balance create a positive work environment, reduce burnout, and retain motivated and satisfied employees.",
"target_sentence": "Title: Life Hacks: Mastering the Art of Work-Life Harmony\n\nHey life enthusiasts! It's time to uncover the secrets of balancing work and play. We're all about relaxation rituals, productivity tips, and making work-life balance a joyful dance. Join the work-life harmony party!"
},
{
"source_sentence": "Title: The Role of Ethics in Artificial Intelligence\n\nEthical considerations are paramount in the development and deployment of artificial intelligence (AI) systems. Ensuring AI aligns with ethical principles prevents harmful consequences and promotes responsible AI innovation.",
"target_sentence": "Title: Ethical AI: Navigating the Moral Tech Landscape\n\nHey tech ethicists! Let's dive into the fascinating world of AI ethics and responsible technology. We're all about ethical coding, digital dilemmas, and making tech a force for good. Join the ethical AI party!"
},
{
"source_sentence": "Title: The Significance of Financial Planning for Retirement\n\nEffective financial planning is crucial for a secure retirement. Planning for retirement involves setting financial goals, creating a savings strategy, and considering investment options to ensure a comfortable and worry-free retirement.",
"target_sentence": "Title: Retire Happy: Your Guide to Financial Freedom\n\nHey future retirees! It's time to embark on a retirement adventure filled with financial wisdom and relaxation plans. We're all about dream vacations, retirement bucket lists, and making financial planning a joyful journey. Join the retirement party!"
},
{
"source_sentence": "Title: The Role of Digital Marketing in Business Growth\n\nDigital marketing is a cornerstone of modern business growth strategies. Leveraging digital channels, such as social media and online advertising, helps businesses reach wider audiences, increase brand visibility, and drive revenue.",
"target_sentence": "Title: Digital Domination: Your Guide to Winning Online\n\nHey digital marketers! Let's dive into the exciting world of digital marketing and online presence. We're all about social media success, content wizardry, and making digital marketing a thrilling adventure. Join the online marketing party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence on Customer Service\n\nArtificial Intelligence (AI) is transforming customer service by enabling chatbots, virtual assistants, and personalized customer experiences. AI-powered solutions enhance efficiency, reduce response times, and improve customer satisfaction.",
"target_sentence": "Title: AI Superheroes: Navigating Customer Service with Tech\n\nHey customer service champs! It's time to explore the world of AI-powered support. We're all about quick resolutions, happy customers, and making AI your customer service sidekick. Join the customer service tech party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity in a Connected World\n\nCybersecurity is paramount in an increasingly connected world. Protecting sensitive information, safeguarding networks, and ensuring data privacy are critical components of a secure digital landscape.",
"target_sentence": "Title: Digital Defenders Unite: Your Guide to Cybersecurity\n\nHey digital defenders! Let's talk about the exciting world of cybersecurity and how you can protect yourself online. We're all about safe browsing, strong passwords, and making cybersecurity an adventure. Join the digital defense party!"
},
{
"source_sentence": "Title: The Role of Environmental Conservation in Preserving Biodiversity\n\nEnvironmental conservation efforts are essential for preserving the planet's biodiversity. Protecting natural habitats, reducing pollution, and promoting sustainable practices are crucial steps toward ensuring the survival of diverse species and ecosystems.",
"target_sentence": "Title: Earth Heroes Unite: Your Guide to Conservation\n\nHey eco-warriors! It's time to embark on a conservation adventure to protect our planet's biodiversity. We're all about eco-quests, nature explorations, and making conservation a thrilling journey. Join the Earth-saving party!"
},
{
"source_sentence": "Title: The Impact of AI in Healthcare\n\nArtificial Intelligence (AI) is revolutionizing the healthcare industry with applications in diagnosis, treatment, and patient care. AI-driven technologies, such as predictive analytics and medical imaging, are enhancing healthcare outcomes and patient experiences.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health innovators! Let's dive into the world of AI-powered healthcare. We're all about smart apps, wearable wonders, and making healthcare a high-tech adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Importance of Gender Equality in the Workplace\n\nGender equality is a cornerstone of fostering a fair and inclusive workplace. Equal opportunities, pay, and treatment for all employees contribute to a positive work environment and organizational success.",
"target_sentence": "Title: Work It, Equal Style: Your Guide to Workplace Fairness\n\nHey workplace champions! It's time to dive into the world of workplace equality and inclusion. We're all about high-fives, diversity celebrations, and making the office a fantastic place for everyone. Join the workplace fairness party!"
},
{
"source_sentence": "Title: The Significance of Renewable Energy in Addressing Climate Change\n\nRenewable energy sources, such as solar and wind power, play a critical role in mitigating climate change. Transitioning to renewable energy reduces greenhouse gas emissions, promotes sustainability, and supports global efforts to combat climate change.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Renewable Energy\n\nHey eco-enthusiasts! It's time to put on your green superhero cape and dive into the world of renewable energy. We're all about solar smiles, wind-powered high-fives, and embracing the renewable energy revolution. Join the green energy party!"
},
{
"source_sentence": "Title: The Role of Cybersecurity in Protecting Digital Assets\n\nCybersecurity is essential for safeguarding digital assets from threats and attacks. Protecting data, networks, and devices ensures data privacy, business continuity, and the integrity of digital operations.",
"target_sentence": "Title: Cybersecurity Demystified: Your Guide to Digital Protection\n\nHey digital defenders! Let's unravel the secrets of cybersecurity and keep your online world safe. We're all about strong passwords, threat showdowns, and making cybersecurity a fun adventure. Join the digital defense party!"
},
{
"source_sentence": "Title: The Impact of Stress on Mental Health\n\nChronic stress can have detrimental effects on mental health, leading to anxiety, depression, and other disorders. Managing stress through relaxation techniques, self-care, and seeking support is crucial for maintaining mental well-being.",
"target_sentence": "Title: Stress-Free Living: Your Guide to Inner Peace\n\nHey stress warriors! Let's chat about keeping calm in the chaos of life. We're all about stress-busting tips, relaxation rituals, and making zen a way of life. Join the stress-free living party!"
},
{
"source_sentence": "Title: The Role of Emotional Intelligence in Leadership\n\nEmotional intelligence (EQ) is a crucial skill for effective leadership. Leaders with high EQ can navigate complex emotions, build strong relationships, and inspire teams to achieve their best results.",
"target_sentence": "Title: Leading with Heart: The EQ Guide to Leadership\n\nHey future leaders! Let's dive into the world of emotional intelligence and its superpowers in leadership. We're all about heart-centered leadership, empathy adventures, and making emotional intelligence your leadership super-skill. Join the EQ leadership party!"
},
{
"source_sentence": "Title: The Significance of Early Childhood Education\n\nEarly childhood education plays a crucial role in a child's cognitive, social, and emotional development. High-quality early education programs provide a strong foundation for lifelong learning and academic success.",
"target_sentence": "Title: Tiny Explorers: Nurturing Young Minds with Fun\n\nHey early educators! It's time to embrace the joys of teaching tiny tots. We're all about storytime adventures, finger-painting masterpieces, and making early education a delightful journey. Join the early education party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Education\n\nArtificial Intelligence (AI) is transforming education with personalized learning, adaptive assessments, and data-driven insights. AI-powered educational tools enhance student engagement and improve learning outcomes.",
"target_sentence": "Title: Tech-Savvy Learning: Your Guide to Digital Education\n\nHey digital learners! Let's dive into the world of educational technology and online learning. We're all about virtual field trips, interactive lessons, and making education a tech-fueled adventure. Join the digital education party!"
},
{
"source_sentence": "Title: The Importance of Time Management for Workplace Productivity\n\nEffective time management is crucial for maximizing workplace productivity. Organizing tasks, setting priorities, and minimizing distractions allow employees to complete tasks efficiently and meet deadlines consistently.",
"target_sentence": "Title: Work Smarter, Not Harder: Your Guide to Time Mastery\n\nHey productivity enthusiasts! It's time to unlock the secrets of time management and conquer your workday. We're all about productivity hacks, time-saving tricks, and making work a breeze. Join the time mastery party!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is revolutionizing healthcare by improving diagnostics, treatment recommendations, and patient care. AI-driven technologies offer the potential to enhance medical outcomes and streamline healthcare processes.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare. We're all about smart health apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Significance of Cultural Sensitivity in Global Business\n\nCultural sensitivity is essential for successful global business operations. Understanding and respecting cultural differences enhance communication, build trust, and foster positive relationships with international partners and customers.",
"target_sentence": "Title: Global Business 101: Your Guide to Cultural Savvy\n\nHey global entrepreneurs! It's time to embark on a cultural adventure in the business world. We're all about cross-cultural collaborations, international friendships, and making global business a fun journey. Join the cultural savvy party!"
},
{
"source_sentence": "Title: The Importance of Ethical Leadership in Business\n\nEthical leadership is paramount for maintaining integrity and trust within organizations. Leaders who prioritize ethics set a positive example for employees, promote a culture of honesty, and uphold moral standards in decision-making.",
"target_sentence": "Title: Leading with Heart: Your Guide to Ethical Leadership\n\nHey ethical leaders! Let's dive into the world of leadership with integrity and heart. We're all about doing the right thing, fostering trust, and making ethical leadership a joyful journey. Join the ethical leadership party!"
},
{
"source_sentence": "Title: The Significance of Mental Health Awareness in the Workplace\n\nMental health awareness is essential for creating a supportive workplace environment. Recognizing the importance of mental well-being, reducing stigma, and providing resources for employees promote mental health and overall job satisfaction.",
"target_sentence": "Title: Mental Wellness Matters: Your Guide to Workplace Balance\n\nHey mental health advocates! It's time to explore the world of well-being at work. We're all about self-care rituals, stress-busting tips, and making mental health a priority. Join the workplace wellness party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Manufacturing\n\nArtificial Intelligence (AI) is transforming the manufacturing industry through automation, predictive maintenance, and quality control. AI-driven processes improve efficiency, reduce costs, and enhance overall manufacturing performance.",
"target_sentence": "Title: Manufacturing Magic: Your Guide to AI-Powered Production\n\nHey manufacturing wizards! Let's dive into the world of AI-driven production and smart factories. We're all about automation wonders, quality assurance fun, and making manufacturing a high-tech adventure. Join the manufacturing magic party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in a Sustainable Future\n\nRenewable energy sources, such as wind and solar power, are pivotal for achieving a sustainable future. Transitioning to clean energy reduces carbon emissions, mitigates climate change, and promotes environmental stewardship.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Renewable Energy\n\nHey eco-enthusiasts! It's time to dive into the world of renewable energy and sustainability. We're all about solar smiles, wind-powered high-fives, and embracing the green energy revolution. Join the clean energy party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Finance\n\nArtificial Intelligence (AI) is revolutionizing the finance industry through automated trading, risk assessment, and fraud detection. AI-driven algorithms enhance decision-making, optimize investments, and improve financial outcomes.",
"target_sentence": "Title: FinTech Fun: Your Guide to Smart Money Management\n\nHey finance aficionados! Let's dive into the world of AI-powered finance and smart investments. We're all about digital wallets, budgeting made easy, and making finance a tech-savvy adventure. Join the FinTech fun party!"
},
{
"source_sentence": "Title: The Importance of Diversity and Inclusion in Tech\n\nDiversity and inclusion are critical for fostering innovation and creativity in the tech industry. Embracing diverse perspectives, backgrounds, and experiences leads to better problem-solving, product development, and organizational growth.",
"target_sentence": "Title: Tech Trailblazers: Your Guide to a Diverse and Inclusive Tech Community\n\nHey tech enthusiasts! It's time to join the diverse and inclusive tech revolution. We're all about coding camaraderie, innovation celebrations, and making the tech world a welcoming place for everyone. Join the tech diversity party!"
},
{
"source_sentence": "Title: The Significance of Sustainable Agriculture in Ensuring Food Security\n\nSustainable agriculture practices are vital for ensuring food security and protecting the environment. Implementing sustainable farming techniques reduces soil degradation, conserves water resources, and promotes long-term food production.",
"target_sentence": "Title: Eco-Farming Adventure: Your Guide to Sustainable Agriculture\n\nHey eco-farmers! It's time to embrace the world of sustainable agriculture and grow your food with love. We're all about organic harvests, green practices, and making farming an eco-friendly journey. Join the sustainable farming party!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Transportation\n\nArtificial Intelligence (AI) is reshaping the transportation industry through autonomous vehicles, traffic optimization, and route planning. AI-driven transportation systems enhance safety, efficiency, and overall mobility.",
"target_sentence": "Title: Smart Commutes: Your Guide to AI-Powered Transportation\n\nHey travelers of the future! Let's dive into the world of AI-driven transportation and stress-free journeys. We're all about self-driving car joyrides, smart traffic solutions, and making transportation an effortless adventure. Join the transportainment party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity Awareness in the Digital Age\n\nCybersecurity awareness is essential for individuals and organizations to protect against online threats. Understanding common cyber risks, practicing safe online behaviors, and staying informed about the latest security trends are key to maintaining digital safety.",
"target_sentence": "Title: Cyber Savvy 101: Your Guide to Digital Security\n\nHey digital explorers! It's time to navigate the cyber realm with confidence. We're all about secure passwords, vigilant clicks, and making cybersecurity a digital adventure. Join the cyber-savvy party!"
},
{
"source_sentence": "Title: The Significance of Green Building Practices in Sustainable Construction\n\nGreen building practices are crucial for sustainable construction and reducing environmental impact. Utilizing eco-friendly materials, energy-efficient designs, and sustainable construction methods contribute to a more environmentally responsible built environment.",
"target_sentence": "Title: Building the Future: Your Guide to Green Construction\n\nHey eco-builders! It's time to embrace sustainable construction and create a greener world. We're all about eco-brick adventures, solar panel smiles, and making construction a planet-friendly journey. Join the green construction party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Customer Service\n\nArtificial Intelligence (AI) is revolutionizing customer service through chatbots, virtual assistants, and data-driven support. AI-powered solutions enhance response times, reduce customer wait times, and improve overall service quality.",
"target_sentence": "Title: Customer Care 2.0: Your Guide to AI-Powered Service\n\nHey customer care champions! Let's dive into the world of AI-enhanced customer service and elevate customer experiences. We're all about instant resolutions, happy customers, and making support a tech-savvy adventure. Join the customer care 2.0 party!"
},
{
"source_sentence": "Title: The Importance of Data Privacy in the Digital Age\n\nData privacy is essential for protecting personal information in today's digital landscape. Implementing strong privacy practices, securing data storage, and respecting user consent are fundamental in maintaining data privacy and security.",
"target_sentence": "Title: Data Guardians Unite: Your Guide to Online Privacy\n\nHey digital guardians! It's time to safeguard your online presence and protect your digital identity. We're all about privacy settings, secure browsing, and making data privacy a digital adventure. Join the data privacy party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in Reducing Carbon Emissions\n\nRenewable energy sources, such as solar and wind power, are key to reducing carbon emissions and combating climate change. Transitioning to clean energy is essential to achieving global emissions reduction targets.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Renewable Energy\n\nHey eco-enthusiasts! It's time to dive into the world of renewable energy and sustainability. We're all about solar smiles, wind-powered high-fives, and embracing the green energy revolution. Join the clean energy party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is reshaping the healthcare industry through improved diagnostics, predictive analytics, and personalized treatment plans. AI-driven solutions offer the potential to revolutionize patient care and medical research.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about smart health apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity in the Digital Age\n\nCybersecurity is paramount for safeguarding digital assets and sensitive information in today's interconnected world. Implementing robust security measures, educating employees, and staying vigilant against cyber threats are essential for protecting data integrity.",
"target_sentence": "Title: Cybersecurity Demystified: Your Guide to Digital Defense\n\nHey digital defenders! Let's unravel the secrets of cybersecurity and keep your online world safe. We're all about strong passwords, threat showdowns, and making cybersecurity a fun adventure. Join the digital defense party!"
},
{
"source_sentence": "Title: The Role of Emotional Intelligence in Leadership\n\nEmotional intelligence (EQ) is a critical competency for effective leadership. Leaders with high EQ can navigate complex emotions, foster teamwork, and inspire their teams to achieve outstanding results.",
"target_sentence": "Title: Leading with Heart: Your Guide to Emotional Intelligence\n\nHey future leaders! Let's dive into the world of emotional intelligence and leadership with heart. We're all about empathy, relationship building, and making emotional intelligence a superpower for leaders. Join the EQ leadership party!"
},
{
"source_sentence": "Title: The Significance of Early Childhood Education\n\nEarly childhood education plays a pivotal role in a child's cognitive and social development. High-quality early education programs provide a solid foundation for lifelong learning and academic success.",
"target_sentence": "Title: Tiny Explorers: Nurturing Young Minds with Fun\n\nHey early educators! It's time to embark on a joyful journey into the world of early childhood education. We're all about storytime adventures, finger-painting masterpieces, and making early education an unforgettable adventure. Join the early education party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Education\n\nArtificial Intelligence (AI) is reshaping education through personalized learning, adaptive assessments, and data-driven insights. AI-powered educational tools enhance student engagement and academic performance.",
"target_sentence": "Title: Tech-Savvy Learning: Your Guide to Digital Education\n\nHey digital learners! Let's dive into the world of AI-powered education and online learning. We're all about virtual field trips, interactive lessons, and making education a tech-savvy adventure. Join the digital education party!"
},
{
"source_sentence": "Title: The Importance of Time Management for Workplace Productivity\n\nEffective time management is a cornerstone of workplace productivity. Organizing tasks, setting priorities, and minimizing distractions enable employees to work efficiently and meet deadlines consistently.",
"target_sentence": "Title: Work Smarter, Not Harder: Your Guide to Time Mastery\n\nHey productivity enthusiasts! Let's unlock the secrets of time management and conquer the workday with ease. We're all about productivity hacks, time-saving tricks, and making work a breeze. Join the time mastery party!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is transforming healthcare through improved diagnostics, treatment recommendations, and patient care. AI-driven technologies offer the potential to enhance medical outcomes and streamline healthcare processes.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about smart health apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in a Sustainable Future\n\nRenewable energy sources, such as wind and solar power, are pivotal for achieving a sustainable future. Transitioning to clean energy reduces carbon emissions, mitigates climate change, and promotes environmental stewardship.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Renewable Energy\n\nHey eco-enthusiasts! It's time to unlock the secrets of renewable energy and join the sustainability party. We're all about solar smiles, wind-powered high-fives, and embracing the green energy revolution. Join the clean energy party!"
},
{
"source_sentence": "Title: The Significance of Sustainable Agriculture in Ensuring Food Security\n\nSustainable agriculture practices are vital for ensuring food security and protecting the environment. Implementing sustainable farming techniques reduces soil degradation, conserves water resources, and promotes long-term food production.",
"target_sentence": "Title: Eco-Farming Adventure: Your Guide to Sustainable Agriculture\n\nHey eco-farmers! It's time to embark on an eco-friendly adventure in sustainable agriculture. We're all about organic harvests, green practices, and making farming a planet-friendly journey. Join the sustainable farming party!"
},
{
"source_sentence": "Title: The Role of Green Building Practices in Sustainable Construction\n\nGreen building practices are essential for sustainable construction and reducing environmental impact. Utilizing eco-friendly materials, energy-efficient designs, and sustainable construction methods contribute to a more environmentally responsible built environment.",
"target_sentence": "Title: Building the Future: Your Guide to Green Construction\n\nHey eco-builders! It's time to join the sustainability revolution in construction. We're all about eco-brick adventures, solar panel smiles, and making construction a planet-friendly journey. Join the green construction party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Customer Service\n\nArtificial Intelligence (AI) is reshaping customer service through chatbots, virtual assistants, and data-driven support. AI-powered solutions enhance response times, reduce customer wait times, and improve overall service quality.",
"target_sentence": "Title: Customer Care 2.0: Your Guide to AI-Powered Service\n\nHey customer care champions! Let's dive into the world of AI-enhanced customer service and elevate customer experiences. We're all about instant resolutions, happy customers, and making support a tech-savvy adventure. Join the customer care 2.0 party!"
},
{
"source_sentence": "Title: The Importance of Data Privacy in the Digital Age\n\nData privacy is paramount for protecting personal information in today's digital landscape. Implementing strong privacy practices, securing data storage, and respecting user consent are fundamental in maintaining data privacy and security.",
"target_sentence": "Title: Data Guardians Unite: Your Guide to Online Privacy\n\nHey digital guardians! It's time to safeguard your online presence and protect your digital identity. We're all about privacy settings, secure browsing, and making data privacy a digital adventure. Join the data privacy party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in Reducing Carbon Emissions\n\nRenewable energy sources, such as solar and wind power, are key to reducing carbon emissions and combating climate change. Transitioning to clean energy is essential to achieving global emissions reduction targets.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Renewable Energy\n\nHey eco-enthusiasts! It's time to embrace sustainable energy and create a greener world. We're all about eco-brick adventures, solar panel smiles, and making renewable energy an exciting journey. Join the clean energy party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is reshaping the healthcare industry through improved diagnostics, predictive analytics, and personalized treatment plans. AI-driven solutions offer the potential to revolutionize patient care and medical research.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about self-care apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity in the Digital Age\n\nCybersecurity is paramount for safeguarding digital assets and sensitive information in today's interconnected world. Implementing robust security measures, educating employees, and staying vigilant against cyber threats are essential for protecting data integrity.",
"target_sentence": "Title: Cybersecurity Demystified: Your Guide to Digital Defense\n\nHey digital defenders! Let's unravel the mysteries of cybersecurity and keep your online world safe. We're all about strong passwords, threat showdowns, and making cybersecurity a fun adventure. Join the digital defense party!"
},
{
"source_sentence": "Title: The Role of Emotional Intelligence in Leadership\n\nEmotional intelligence (EQ) is a critical competency for effective leadership. Leaders with high EQ can navigate complex emotions, foster teamwork, and inspire their teams to achieve outstanding results.",
"target_sentence": "Title: Leading with Heart: Your Guide to Emotional Intelligence\n\nHey future leaders! Let's dive into the world of emotional intelligence and leadership with heart. We're all about empathy, relationship building, and making emotional intelligence a superpower for leaders. Join the EQ leadership party!"
},
{
"source_sentence": "Title: The Significance of Early Childhood Education\n\nEarly childhood education plays a pivotal role in a child's cognitive and social development. High-quality early education programs provide a solid foundation for lifelong learning and academic success.",
"target_sentence": "Title: Tiny Explorers: Nurturing Young Minds with Fun\n\nHey early educators! It's time to embark on a joyful journey into the world of early childhood education. We're all about storytime adventures, finger-painting masterpieces, and making early education an unforgettable adventure. Join the early education party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Education\n\nArtificial Intelligence (AI) is reshaping education through personalized learning, adaptive assessments, and data-driven insights. AI-powered educational tools enhance student engagement and academic performance.",
"target_sentence": "Title: Tech-Savvy Learning: Your Guide to Digital Education\n\nHey digital learners! Let's dive into the world of AI-powered education and online learning. We're all about virtual field trips, interactive lessons, and making education a tech-savvy adventure. Join the digital education party!"
},
{
"source_sentence": "Title: The Importance of Time Management for Workplace Productivity\n\nEffective time management is a cornerstone of workplace productivity. Organizing tasks, setting priorities, and minimizing distractions enable employees to work efficiently and meet deadlines consistently.",
"target_sentence": "Title: Work Smarter, Not Harder: Your Guide to Time Mastery\n\nHey productivity enthusiasts! Let's unlock the secrets of time management and conquer the workday with ease. We're all about productivity hacks, time-saving tricks, and making work a breeze. Join the time mastery party!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is transforming healthcare through improved diagnostics, treatment recommendations, and patient care. AI-driven technologies offer the potential to enhance medical outcomes and streamline healthcare processes.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about smart health apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Significance of Cultural Sensitivity in Global Business\n\nCultural sensitivity is essential for successful global business operations. Understanding and respecting cultural differences enhance communication, build trust, and foster positive relationships with international partners and customers.",
"target_sentence": "Title: Global Business 101: Your Guide to Cultural Savvy\n\nHey global entrepreneurs! It's time to embark on a cultural adventure in the business world. We're all about cross-cultural collaborations, international friendships, and making global business a fun journey. Join the cultural savvy party!"
},
{
"source_sentence": "Title: The Role of Green Technology in Environmental Sustainability\n\nGreen technology plays a pivotal role in achieving environmental sustainability goals. Innovations such as renewable energy, energy-efficient appliances, and eco-friendly transportation contribute to reducing carbon footprints and preserving our planet.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Eco-Tech\n\nHey eco-enthusiasts! It's time to dive into the world of green technology and embrace a sustainable future. We're all about green gadgets, energy-saving tips, and making tech a planet-friendly adventure. Join the eco-tech party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Transportation\n\nArtificial Intelligence (AI) is reshaping transportation through autonomous vehicles, traffic optimization, and smart infrastructure. AI-powered solutions enhance safety, reduce traffic congestion, and improve overall mobility.",
"target_sentence": "Title: Smart Commutes: Your Guide to AI-Powered Transportation\n\nHey travelers of the future! Let's dive into the world of AI-driven transportation and enjoy stress-free journeys. We're all about self-driving car adventures, smart traffic solutions, and making transportation an effortless ride. Join the transportainment party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity Awareness in the Digital Age\n\nCybersecurity awareness is crucial for protecting personal information and digital assets in today's interconnected world. Practicing safe online behavior, recognizing cyber threats, and implementing security best practices are key to staying secure in the digital age.",
"target_sentence": "Title: Digital Defender's Guide: Your Passport to Cybersecurity\n\nHey digital defenders! It's time to navigate the digital world with confidence. We're all about secure passwords, vigilant clicks, and making cybersecurity a fun digital adventure. Join the cyber defense party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in Reducing Carbon Emissions\n\nRenewable energy sources, such as solar and wind power, are pivotal for reducing carbon emissions and mitigating climate change. Transitioning to clean energy is a fundamental step in achieving a sustainable and eco-friendly future.",
"target_sentence": "Title: Go Green, Go Fun: Your Guide to Renewable Energy\n\nHey eco-enthusiasts! It's time to embrace sustainable energy and create a greener world. We're all about eco-brick adventures, solar panel smiles, and making renewable energy an exciting journey. Join the clean energy party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is revolutionizing healthcare through improved diagnostics, predictive analytics, and personalized patient care. AI-driven solutions have the potential to enhance medical outcomes and streamline healthcare processes.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about self-care apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Significance of Cultural Sensitivity in Global Business\n\nCultural sensitivity is a cornerstone of successful global business operations. Understanding and respecting cultural differences facilitate effective communication, build trust, and foster positive relationships with international partners and customers.",
"target_sentence": "Title: Global Business 101: Your Guide to Cultural Savvy\n\nHey global entrepreneurs! It's time to embark on a cultural adventure in the business world. We're all about cross-cultural collaborations, international friendships, and making global business a fun journey. Join the cultural savvy party!"
},
{
"source_sentence": "Title: The Role of Green Technology in Sustainable Urban Development\n\nGreen technology plays a pivotal role in promoting sustainable urban development. Innovations such as eco-friendly building materials, energy-efficient infrastructure, and smart city solutions contribute to creating environmentally conscious and livable cities.",
"target_sentence": "Title: Building Sustainable Cities: Your Guide to Green Urban Living\n\nHey urban enthusiasts! It's time to embrace sustainability and create greener, smarter cities. We're all about eco-friendly neighborhoods, energy-saving innovations, and making urban living an exciting and eco-conscious journey. Join the green urban living party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Transportation\n\nArtificial Intelligence (AI) is reshaping transportation through autonomous vehicles, traffic optimization, and smart infrastructure. AI-powered solutions enhance safety, reduce congestion, and improve the overall efficiency of transportation systems.",
"target_sentence": "Title: Smart Commutes: Your Guide to AI-Powered Transportation\n\nHey travelers of the future! Let's dive into the world of AI-driven transportation and enjoy stress-free journeys. We're all about self-driving car adventures, smart traffic solutions, and making transportation an effortless ride. Join the transportainment party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity Awareness in the Digital Age\n\nCybersecurity awareness is paramount for protecting personal information and digital assets in today's interconnected world. Implementing strong security practices, recognizing online threats, and staying informed about cybersecurity trends are essential for maintaining online safety.",
"target_sentence": "Title: Digital Defender's Guide: Your Passport to Cybersecurity\n\nHey digital defenders! It's time to navigate the digital world with confidence. We're all about secure passwords, vigilant clicks, and making cybersecurity a fun digital adventure. Join the cyber defense party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in Sustainable Agriculture\n\nRenewable energy sources, such as solar and wind power, play a vital role in promoting sustainable agriculture. The adoption of clean energy solutions enhances farm efficiency, reduces carbon footprints, and supports environmentally conscious farming practices.",
"target_sentence": "Title: Farming the Future: Your Guide to Sustainable Agriculture\n\nHey eco-farmers! It's time to embark on a sustainable farming journey powered by renewable energy. We're all about sunny fields, wind-whispered harvests, and making agriculture an environmentally friendly adventure. Join the sustainable farming party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Education\n\nArtificial Intelligence (AI) is revolutionizing education through personalized learning, adaptive assessments, and data-driven insights. AI-powered educational tools are transforming the way students learn and educators teach.",
"target_sentence": "Title: Learning in the Digital Age: Your Guide to AI-Powered Education\n\nHey digital learners! Let's dive into the world of AI-enhanced education and experience learning like never before. We're all about interactive lessons, smart study buddies, and making education a tech-savvy adventure. Join the digital education party!"
},
{
"source_sentence": "Title: The Importance of Emotional Intelligence in Leadership\n\nEmotional Intelligence (EQ) is a critical factor in effective leadership. Leaders with high EQ can navigate complex emotions, build strong relationships, and inspire their teams to achieve exceptional results.",
"target_sentence": "Title: Leading with Empathy: Your Guide to Emotional Intelligence\n\nHey future leaders! It's time to embrace the power of emotional intelligence and lead with empathy. We're all about connecting on a deeper level, fostering team harmony, and making leadership a journey of understanding. Join the EQ leadership party!"
},
{
"source_sentence": "Title: The Role of Green Technology in Climate Change Mitigation\n\nGreen technology is instrumental in mitigating climate change. Technologies such as carbon capture, sustainable agriculture practices, and renewable energy sources are crucial in reducing greenhouse gas emissions and preserving our planet's future.",
"target_sentence": "Title: Green Living 101: Your Guide to Eco-Friendly Tech\n\nHey eco-enthusiasts! It's time to embrace green technology and create a sustainable world. We're all about eco-gadgets, sustainable living tips, and making technology a friendlier ally for our planet. Join the eco-friendly tech party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is transforming healthcare through enhanced diagnostics, predictive analytics, and personalized patient care. AI-driven solutions have the potential to revolutionize medical practices, improve patient outcomes, and streamline healthcare operations.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about self-care apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Significance of Cultural Sensitivity in Global Business\n\nCultural sensitivity is paramount for successful global business operations. Understanding and respecting cultural differences foster effective communication, build trust, and enable positive relationships with international partners and customers.",
"target_sentence": "Title: Global Business Explorers: Your Guide to Cross-Cultural Success\n\nHey global entrepreneurs! It's time to embark on a cultural adventure in the business world. We're all about cross-cultural collaborations, international friendships, and making global business an exciting journey. Join the cross-cultural success party!"
},
{
"source_sentence": "Title: The Impact of Renewable Energy on Environmental Sustainability\n\nRenewable energy sources, such as solar and wind power, have a profound impact on enhancing environmental sustainability. The adoption of clean energy technologies contributes to reducing carbon emissions and mitigating climate change.",
"target_sentence": "Title: Going Green with Renewable Energy: A Fun Ride\n\nHey folks! Let's talk about going green with renewable energy. It's like upgrading our planet's ride to an eco-friendly rollercoaster. Solar panels and wind turbines are the cool kids on the block!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Healthcare\n\nArtificial Intelligence (AI) is revolutionizing the healthcare sector through improved diagnostics, predictive analytics, and personalized patient care. AI-driven solutions are enhancing medical outcomes and optimizing healthcare operations.",
"target_sentence": "Title: HealthTech Revolution: Your Guide to AI in Healthcare\n\nHey health tech enthusiasts! Let's dive into the world of AI-powered healthcare and wellness. We're all about self-care apps, AI-assisted diagnostics, and making healthcare a tech-savvy adventure. Join the HealthTech revolution party!"
},
{
"source_sentence": "Title: The Importance of Cultural Sensitivity in Global Business\n\nCultural sensitivity is paramount for successful global business operations. Understanding and respecting cultural differences facilitate effective communication, build trust, and enable positive relationships with international partners and customers.",
"target_sentence": "Title: Global Business Explorers: Your Guide to Cross-Cultural Success\n\nHey global entrepreneurs! It's time to embark on a cultural adventure in the business world. We're all about cross-cultural collaborations, international friendships, and making global business an exciting journey. Join the cross-cultural success party!"
},
{
"source_sentence": "Title: The Significance of Emotional Intelligence in Leadership\n\nEmotional Intelligence (EQ) is a critical factor in effective leadership. Leaders with high EQ can navigate complex emotions, build strong relationships, and inspire their teams to achieve exceptional results.",
"target_sentence": "Title: Leading with Empathy: Your Guide to Emotional Intelligence\n\nHey future leaders! It's time to embrace the power of emotional intelligence and lead with empathy. We're all about connecting on a deeper level, fostering team harmony, and making leadership a journey of understanding. Join the EQ leadership party!"
},
{
"source_sentence": "Title: The Role of Green Technology in Sustainable Agriculture\n\nGreen technology plays a pivotal role in promoting sustainable agriculture. Innovations such as eco-friendly building materials, energy-efficient infrastructure, and smart city solutions contribute to creating environmentally conscious and livable cities.",
"target_sentence": "Title: Farming the Future: Your Guide to Sustainable Agriculture\n\nHey eco-farmers! It's time to embark on a sustainable farming journey powered by renewable energy. We're all about sunny fields, wind-whispered harvests, and making agriculture an environmentally friendly adventure. Join the sustainable farming party!"
},
{
"source_sentence": "Title: The Influence of Technology on Modern Education\n\nTechnology has become a driving force in modern education. Digital tools, online learning platforms, and adaptive assessments have transformed the way students access and engage with educational content, leading to improved learning outcomes.",
"target_sentence": "Title: Learning in the Digital Age: Your Guide to Tech-Savvy Education\n\nHey digital learners! Let's dive into the world of tech-powered education and experience learning like never before. We're all about interactive lessons, smart study buddies, and making education a tech-savvy adventure. Join the digital education party!"
},
{
"source_sentence": "Title: The Advancements in Artificial Intelligence and Their Impact on Industry\n\nRecent advancements in artificial intelligence (AI) are reshaping various industries. AI-driven automation, predictive analytics, and natural language processing are revolutionizing manufacturing, healthcare, finance, and beyond, driving efficiency and innovation.",
"target_sentence": "Title: Industry 4.0: Your Guide to the AI Revolution\n\nHey industry innovators! It's time to ride the wave of AI revolutionizing our workplaces. We're all about smart factories, data-driven decisions, and making industry a tech-driven adventure. Join the Industry 4.0 party!"
},
{
"source_sentence": "Title: The Role of Sustainable Practices in Corporate Social Responsibility\n\nIn the realm of corporate social responsibility (CSR), sustainable practices have gained prominence. Companies that prioritize sustainability through eco-friendly initiatives, ethical sourcing, and reduced carbon footprints not only benefit the environment but also enhance their reputation and profitability.",
"target_sentence": "Title: Eco-Conscious Corporations: Your Guide to CSR Adventures\n\nHey corporate world changemakers! Let's embark on an eco-conscious journey in the realm of corporate social responsibility. We're all about sustainable strategies, ethical values, and making CSR a socially responsible adventure. Join the CSR party!"
},
{
"source_sentence": "Title: The Influence of Green Building Design on Sustainable Architecture\n\nGreen building design principles have a profound influence on sustainable architecture. Features such as energy-efficient insulation, natural lighting, and renewable materials are essential elements in constructing environmentally friendly and energy-efficient buildings.",
"target_sentence": "Title: Sustainable Architecture Adventure: Your Guide to Eco-Friendly Buildings\n\nHey architecture enthusiasts! Let's embark on an eco-friendly journey through sustainable architecture. We're all about green rooftops, nature-inspired designs, and making buildings an environmentally conscious adventure. Join the sustainable architecture party!"
},
{
"source_sentence": "Title: The Digital Transformation of Financial Services\n\nThe financial services industry is undergoing a significant digital transformation. Fueled by advancements in fintech, online banking, and blockchain technology, this evolution is changing how consumers access and manage their finances, leading to greater convenience and efficiency.",
"target_sentence": "Title: Money Matters 2.0: Your Guide to Digital Finance\n\nHey savvy spenders! Let's dive into the world of digital finance and make managing money a breeze. We're all about mobile wallets, online investments, and making finance a tech-savvy adventure. Join the digital finance party!"
},
{
"source_sentence": "Title: The Role of Renewable Energy in Reducing Carbon Footprints\n\nRenewable energy sources play a vital role in reducing carbon footprints. Solar, wind, and hydropower technologies offer sustainable alternatives to fossil fuels, significantly contributing to the global effort to combat climate change.",
"target_sentence": "Title: Go Green, Stay Cool: Your Guide to Renewable Energy\n\nHey eco-warriors! It's time to go green and keep it cool with renewable energy. We're all about sunny solutions, wind-powered fun, and making sustainability an adventure. Join the green energy party!"
},
{
"source_sentence": "Title: The Impact of E-Learning on Education Accessibility\n\nE-learning has had a profound impact on improving education accessibility. With online courses, virtual classrooms, and mobile learning apps, students worldwide can access quality education regardless of geographical or physical constraints.",
"target_sentence": "Title: Learning Unleashed: Your Guide to E-Learning Adventures\n\nHey digital scholars! Let's dive into the world of e-learning and unlock a new era of education. We're all about virtual classrooms, interactive lessons, and making learning an online adventure. Join the e-learning party!"
},
{
"source_sentence": "Title: The Importance of Diversity and Inclusion in the Workplace\n\nDiversity and inclusion are crucial elements of a modern workplace. Fostering a diverse workforce and creating an inclusive environment result in higher productivity, innovation, and employee satisfaction, contributing to overall organizational success.",
"target_sentence": "Title: Embrace Diversity at Work: Your Guide to Inclusive Excellence\n\nHey workplace champions! Let's embrace diversity and create an inclusive culture that rocks. We're all about diverse teams, equal opportunities, and making work a place where everyone thrives. Join the inclusion party!"
},
{
"source_sentence": "Title: The Evolution of Artificial Intelligence in Healthcare Diagnosis\n\nThe field of artificial intelligence in healthcare diagnosis has evolved significantly. AI-powered systems now offer accurate and timely diagnostics, aiding healthcare professionals in delivering improved patient care and outcomes.",
"target_sentence": "Title: AI-Enhanced Healthcare: Your Guide to Diagnostics with a Dash of Tech\n\nHey health tech enthusiasts! Let's dive into the world of AI-enhanced healthcare and experience diagnostics with a tech twist. We're all about smart diagnoses, digital health buddies, and making healthcare a tech-savvy adventure. Join the health tech party!"
},
{
"source_sentence": "Title: The Significance of Sustainable Practices in Urban Planning\n\nSustainable practices are of paramount significance in urban planning. Eco-friendly architecture, efficient transportation systems, and green spaces contribute to creating livable cities that prioritize environmental responsibility and quality of life.",
"target_sentence": "Title: Sustainable Cities 101: Your Guide to Greener Urban Living\n\nHey city dwellers! Let's embark on a journey toward greener urban living. We're all about eco-friendly neighborhoods, bike-friendly streets, and making cities a sustainable adventure. Join the urban sustainability party!"
},
{
"source_sentence": "Title: The Role of Technology in Enhancing Customer Experiences\n\nTechnology plays a pivotal role in enhancing customer experiences. From personalized recommendations to efficient customer support systems, businesses leverage technology to create positive interactions and build long-lasting customer relationships.",
"target_sentence": "Title: Tech-Savvy Customer Delight: Your Guide to Exceptional Experiences\n\nHey happy customers! Let's dive into the world of tech-savvy delight and experience customer service like never before. We're all about personalized interactions, speedy resolutions, and making your experience a tech-driven adventure. Join the customer delight party!"
},
{
"source_sentence": "Title: The Importance of Ethical Leadership in Business\n\nEthical leadership holds great importance in the world of business. Leaders who prioritize ethics and moral values set a strong example for their teams, fostering trust, integrity, and a culture of ethical decision-making within their organizations.",
"target_sentence": "Title: Lead with Integrity: Your Guide to Ethical Leadership\n\nHey ethical leaders! It's time to lead with integrity and create a workplace that values ethics and trust. We're all about transparency, fair decisions, and making leadership an ethical adventure. Join the ethical leadership party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence on Financial Markets\n\nArtificial Intelligence (AI) is transforming financial markets with predictive analytics and algorithmic trading. AI-driven insights enable investors and traders to make data-informed decisions, contributing to market efficiency and volatility reduction.",
"target_sentence": "Title: Money Moves with AI: Your Guide to Investing in the Digital Age\n\nHey future investors! Let's dive into the world of AI-powered finance and make money moves like never before. We're all about smart portfolios, robo-advisors, and making investing a tech-savvy adventure. Join the finance tech party!"
},
{
"source_sentence": "Title: The Role of Data Analytics in Healthcare Decision-Making\n\nData analytics plays a critical role in healthcare decision-making. Through the analysis of patient data, healthcare providers can optimize treatment plans, reduce costs, and improve patient outcomes, ushering in a new era of evidence-based medicine.",
"target_sentence": "Title: Healthy Insights: Your Guide to Data-Driven Healthcare\n\nHey wellness enthusiasts! Let's dive into the world of data-driven healthcare and uncover the secrets to a healthier you. We're all about personalized treatments, digital health trackers, and making healthcare a data-driven adventure. Join the health data party!"
},
{
"source_sentence": "Title: The Importance of Sustainable Agriculture in Ensuring Food Security\n\nSustainable agriculture plays a pivotal role in ensuring global food security. By practicing environmentally friendly farming methods, we can meet the growing demand for food while preserving natural resources and reducing the impact of agriculture on the environment.",
"target_sentence": "Title: Farming for the Future: Your Guide to Sustainable Agriculture\n\nHey eco-farmers! Let's embark on a sustainable agriculture journey and grow the future. We're all about organic crops, eco-friendly practices, and making farming an environmentally conscious adventure. Join the sustainable farming party!"
},
{
"source_sentence": "Title: The Impact of Artificial Intelligence on Supply Chain Management\n\nArtificial Intelligence (AI) is revolutionizing supply chain management by optimizing logistics, demand forecasting, and inventory control. AI-powered solutions enhance efficiency, reduce costs, and improve the overall resilience of supply chains.",
"target_sentence": "Title: Smart Supply Chains: Your Guide to Streamlined Logistics\n\nHey supply chain enthusiasts! Let's dive into the world of smart logistics and make supply chains as smooth as a well-oiled machine. We're all about on-time deliveries, efficient warehouses, and making logistics a tech-savvy adventure. Join the supply chain party!"
},
{
"source_sentence": "Title: The Role of Machine Learning in Predictive Maintenance\n\nMachine learning is instrumental in predictive maintenance, allowing industries to predict equipment failures and perform maintenance proactively. This approach minimizes downtime, reduces maintenance costs, and improves overall operational efficiency.",
"target_sentence": "Title: Predictive Maintenance Made Fun: Your Guide to Machine Learning Magic\n\nHey maintenance magicians! Let's dive into the world of predictive maintenance with the magic of machine learning. We're all about equipment longevity, data-driven upkeep, and making maintenance a tech-savvy adventure. Join the maintenance magic party!"
},
{
"source_sentence": "Title: The Importance of Cybersecurity in Protecting Digital Assets\n\nCybersecurity plays a crucial role in safeguarding digital assets and sensitive information. Robust security measures, encryption, and vigilant monitoring are essential components of protecting against cyber threats and data breaches.",
"target_sentence": "Title: Defend Your Data: Your Guide to Cybersecurity Superpowers\n\nHey digital defenders! Let's dive into the world of cybersecurity and become the guardians of the digital realm. We're all about secure connections, vigilant monitoring, and making cybersecurity a cyber-adventure. Join the cybersecurity superhero party!"
},
{
"source_sentence": "Title: The Impact of Green Technology on Sustainable Living\n\nGreen technology is making a significant impact on sustainable living by reducing energy consumption and environmental impact. Sustainable practices, such as energy-efficient appliances and eco-friendly transportation, are becoming integral to modern lifestyles.",
"target_sentence": "Title: Living Green, Living Well: Your Guide to Sustainable Lifestyle\n\nHey eco-enthusiasts! Let's embark on a journey toward sustainable living and make green choices a way of life. We're all about eco-conscious homes, eco-friendly commuting, and making sustainability a lifestyle adventure. Join the sustainable living party!"
},
{
"source_sentence": "Title: The Future of Artificial Intelligence in Healthcare Diagnosis\n\nThe future of healthcare diagnosis lies in the continued advancement of Artificial Intelligence (AI). AI-driven diagnostic tools, predictive analytics, and personalized treatments will reshape the medical landscape, offering more precise and effective healthcare solutions.",
"target_sentence": "Title: Your Health, Your Way: Your Guide to AI-Powered Healthcare\n\nHey health enthusiasts! Let's dive into the world of AI-powered healthcare and take charge of our well-being. We're all about virtual check-ups, personalized treatment plans, and making healthcare an AI-driven adventure. Join the healthcare tech party!"
},
{
"source_sentence": "Title: The Significance of Sustainable Tourism in Preserving Natural Beauty\n\nSustainable tourism is of utmost significance in preserving the natural beauty of our planet. Eco-friendly accommodations, responsible travel practices, and community engagement efforts ensure that future generations can enjoy the world's stunning landscapes and cultural treasures.",
"target_sentence": "Title: Go Green, Travel Smart: Your Guide to Eco-Friendly Adventures\n\nHey wanderlust explorers! Let's embark on eco-friendly adventures and discover the world responsibly. We're all about eco-responsible travel, authentic experiences, and making tourism a sustainable adventure. Join the eco-tourism party!"
},
{
"source_sentence": "Title: The Role of Big Data Analytics in Business Decision-Making\n\nBig Data analytics is playing a pivotal role in shaping business decision-making. By extracting insights from vast data sets, organizations gain a competitive edge, improve operational efficiency, and drive innovation across various industry sectors.",
"target_sentence": "Title: Mastering Business with Big Data: Your Guide to Data-Driven Success\n\nHey business enthusiasts! Let's dive into the world of data-driven success and unlock the potential of Big Data. We're all about actionable insights, smart decisions, and making business a data-driven adventure. Join the data-driven business party!"
},
{
"source_sentence": "Title: The Impact of Digital Marketing on Brand Awareness\n\nDigital marketing has a profound impact on brand awareness in the digital age. Strategic online campaigns, social media engagement, and personalized content enhance a brand's visibility, connecting businesses with their target audiences more effectively.",
"target_sentence": "Title: Boost Your Brand Online: Your Guide to Digital Marketing Adventures\n\nHey brand builders! Let's dive into the world of digital marketing and skyrocket your brand's visibility online. We're all about engaging content, social media buzz, and making marketing a digital adventure. Join the digital marketing party!"
},
{
"source_sentence": "Title: The Future of Electric Vehicles in Sustainable Transportation\n\nIn the realm of sustainable transportation, the future belongs to electric vehicles (EVs). With advancements in battery technology and charging infrastructure, EVs are poised to become the cornerstone of eco-friendly commuting. They offer reduced emissions, lower operational costs, and a cleaner, greener tomorrow.",
"target_sentence": "Title: Ride the Green Wave: Your Guide to Electric Vehicle Adventures\n\nHey eco-travelers! Let's ride the green wave and embark on electric vehicle adventures. We're all about emission-free journeys, fast-charging pit stops, and making sustainable commuting an electrifying adventure. Join the EV party!"
},
{
"source_sentence": "Title: The Importance of Biodiversity in Ecosystem Resilience\n\nBiodiversity is a linchpin of ecosystem resilience. Diverse ecosystems are better equipped to withstand environmental changes and disturbances. They provide critical services such as pollination, water purification, and climate regulation, making biodiversity conservation paramount to the health of our planet.",
"target_sentence": "Title: Explore the Wild Side: Your Guide to Biodiversity Conservation\n\nHey nature enthusiasts! Let's explore the wild side and dive into biodiversity conservation. We're all about protecting species, preserving habitats, and making nature conservation an adventure. Join the biodiversity party!"
},
{
"source_sentence": "Title: The Role of Artificial Intelligence in Environmental Monitoring\n\nArtificial Intelligence (AI) has a pivotal role in environmental monitoring. AI-driven systems analyze vast data sets from satellites, sensors, and drones to track changes in ecosystems, air quality, and natural disasters. These insights enable informed decision-making for environmental protection and disaster management.",
"target_sentence": "Title: Eco Guardians Unite: Your Guide to AI-Powered Environmental Stewardship\n\nHey eco-warriors! Let's unite as eco guardians and use AI to protect our planet. We're all about tracking wildlife, monitoring air quality, and making environmental stewardship a tech-savvy adventure. Join the eco guardians party!"
},
{
"source_sentence": "Title: The Influence of Social Media on Modern Politics\n\nSocial media platforms have a significant influence on modern politics. They serve as powerful tools for political campaigns, allowing candidates to reach a broad audience, engage with voters, and disseminate their policy messages. Social media has reshaped political communication and campaigning strategies.",
"target_sentence": "Title: Social Politics 101: Your Guide to Navigating the Digital Political Landscape\n\nHey political enthusiasts! Let's dive into the world of social politics and discover the digital realm of political engagement. We're all about online activism, political discourse, and making politics a digital adventure. Join the social politics party!"
}
] | 89,534 | [
[
-0.0211334228515625,
-0.072509765625,
0.0191802978515625,
0.01165008544921875,
-0.0243072509765625,
0.0201416015625,
0.0110321044921875,
-0.043212890625,
0.04681396484375,
0.041534423828125,
-0.061279296875,
-0.0250396728515625,
-0.027435302734375,
0.0138168... |
fia24/lemma41k | 2023-10-26T11:54:24.000Z | [
"region:us"
] | fia24 | null | null | 0 | 18 | 2023-10-26T11:40:36 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Inflected_Word
dtype: string
- name: Lemma
dtype: string
splits:
- name: train
num_bytes: 2048941.980288042
num_examples: 32995
- name: test
num_bytes: 256156.5591358743
num_examples: 4125
- name: val
num_bytes: 256094.4605760838
num_examples: 4124
download_size: 1387988
dataset_size: 2561193.0000000005
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
# Dataset Card for "lemma41k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 757 | [
[
-0.05841064453125,
-0.00843048095703125,
0.01377105712890625,
0.0120086669921875,
-0.030181884765625,
-0.0090179443359375,
0.01363372802734375,
-0.00867462158203125,
0.06048583984375,
0.035064697265625,
-0.07177734375,
-0.0582275390625,
-0.0467529296875,
-0.... |
atmallen/qm_bob_grader_last_1.0e_0.0p_finetuning | 2023-10-27T04:47:37.000Z | [
"region:us"
] | atmallen | null | null | 0 | 18 | 2023-10-27T04:47:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: statement
dtype: string
- name: choices
sequence: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: true_label
dtype: bool
splits:
- name: train
num_bytes: 11540039
num_examples: 200000
- name: validation
num_bytes: 1159666
num_examples: 20000
- name: test
num_bytes: 1159811
num_examples: 20000
download_size: 3316946
dataset_size: 13859516
---
# Dataset Card for "qm_bob_grader_last_1.0e_0.0p_finetuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 876 | [
[
-0.033538818359375,
-0.02239990234375,
0.01080322265625,
0.00690460205078125,
-0.0124359130859375,
0.006591796875,
0.0269012451171875,
0.01329803466796875,
0.0416259765625,
0.041839599609375,
-0.038482666015625,
-0.06280517578125,
-0.034027099609375,
-0.0214... |
ayushtues/scalecrafter | 2023-10-27T06:35:59.000Z | [
"region:us"
] | ayushtues | null | null | 0 | 18 | 2023-10-27T05:32:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
KendrickPham/fine-tuning-csv | 2023-10-27T07:47:00.000Z | [
"region:us"
] | KendrickPham | null | null | 0 | 18 | 2023-10-27T07:45:07 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,563 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
recoilme/aesthetic_photos_xs | 2023-10-29T15:20:31.000Z | [
"size_categories:1K<n<10K",
"art",
"region:us"
] | recoilme | null | null | 0 | 18 | 2023-10-29T14:50:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1391150970.57
num_examples: 1010
download_size: 1391377501
dataset_size: 1391150970.57
tags:
- art
pretty_name: aesthetic photos xs
size_categories:
- 1K<n<10K
---
# aesthetic_photos_xs
- 1k manually selected photos from Unsplash
- captioned with the BLIP large captioning model and SmilingWolf/wd-v1-4-convnext-tagger-v2
# repositories
- https://github.com/recoilme/unsplash_dwn
- https://github.com/kohya-ss/sd-scripts
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 783 | [
[
-0.044586181640625,
-0.0007109642028808594,
0.0177459716796875,
0.022430419921875,
-0.037017822265625,
0.0190887451171875,
0.0005230903625488281,
-0.0279541015625,
0.04791259765625,
0.048309326171875,
-0.06915283203125,
-0.055206298828125,
-0.0243988037109375,
... |
ContextualAI/trivia_qa_bge_neighbors_nprobe100 | 2023-10-30T23:50:05.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 18 | 2023-10-30T23:44:22 | ---
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
- name: neighbor
dtype: string
splits:
- name: validation
num_bytes: 9756868
num_examples: 7993
download_size: 5797345
dataset_size: 9756868
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "trivia_qa_bge_neighbors_nprobe100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,461 | [
[
-0.0478515625,
-0.0218505859375,
0.03338623046875,
0.01473236083984375,
-0.0011444091796875,
0.01251983642578125,
0.0200958251953125,
-0.0108642578125,
0.060089111328125,
0.029266357421875,
-0.04290771484375,
-0.06329345703125,
-0.0292205810546875,
-0.003055... |
hippocrates/CitationGPTv2_train | 2023-11-01T16:25:20.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 18 | 2023-11-01T16:23:06 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 438996250
num_examples: 118360
- name: valid
num_bytes: 56481934
num_examples: 15080
- name: test
num_bytes: 53275038
num_examples: 14160
download_size: 206800302
dataset_size: 548753222
---
# Dataset Card for "CitationGPTv2_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 636 | [
[
-0.034912109375,
0.0028476715087890625,
0.0110015869140625,
0.033233642578125,
-0.01236724853515625,
-0.0120391845703125,
0.0136566162109375,
0.0009608268737792969,
0.0275115966796875,
0.01114654541015625,
-0.04132080078125,
-0.0248870849609375,
-0.0518493652343... |
sentence-transformers/embedding-training-data | 2021-10-17T17:49:20.000Z | [
"region:us"
] | sentence-transformers | null | null | 52 | 17 | 2022-03-02T23:29:22 | # Training Data for Text Embedding Models
This repository contains training files to train text embedding models, e.g. using [sentence-transformers](https://www.SBERT.net).
## Data Format
All files are in a `jsonl.gz` format: Each line contains a JSON-object that represent one training example.
The JSON objects can come in different formats:
- **Pairs:** `["text1", "text2"]` - This is a positive pair that should be close in vector space.
- **Triplets:** `["anchor", "positive", "negative"]` - This is a triplet: The `positive` text should be close to the `anchor`, while the `negative` text should be distant from the `anchor`.
- **Sets:** `{"set": ["text1", "text2", ...]}` A set of texts describing the same thing, e.g. different paraphrases of the same question, different captions for the same image. Any combination of the elements is considered a positive pair.
- **Query-Pairs:** `{"query": "text", "pos": ["text1", "text2", ...]}` A query together with a set of positive texts. Can be formed into a pair `["query", "positive"]` by randomly selecting a text from `pos`.
- **Query-Triplets:** `{"query": "text", "pos": ["text1", "text2", ...], "neg": ["text1", "text2", ...]}` A query together with a set of positive texts and negative texts. Can be formed into a triplet `["query", "positive", "negative"]` by randomly selecting a text from `pos` and `neg`.
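As a concrete illustration, the line formats above can be normalized into plain pairs and triplets with a few lines of Python. This is a minimal sketch, not code shipped with this repository; the function name and the choice to expand a set into all pairwise combinations are my own:

```python
import json
import random

def to_training_examples(line, rng=random.Random(0)):
    """Convert one JSON line from a training file into a list of
    (anchor, positive) pairs or (anchor, positive, negative) triplets."""
    obj = json.loads(line)
    if isinstance(obj, list) and len(obj) in (2, 3):
        # Plain pair or triplet: use as-is.
        return [tuple(obj)]
    if isinstance(obj, dict):
        if "set" in obj:
            # Set: every combination of two elements is a positive pair.
            texts = obj["set"]
            return [(a, b) for i, a in enumerate(texts) for b in texts[i + 1:]]
        if "query" in obj and "neg" in obj:
            # Query-triplet: sample one positive and one negative.
            return [(obj["query"], rng.choice(obj["pos"]), rng.choice(obj["neg"]))]
        if "query" in obj:
            # Query-pair: sample one positive.
            return [(obj["query"], rng.choice(obj["pos"]))]
    raise ValueError(f"Unrecognized format: {line!r}")
```

To stream one of the `jsonl.gz` files, wrap this in `gzip.open(path, "rt")` and call it once per line.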
## Available Datasets
**Note: I'm currently in the process of uploading the files. Please check again next week for the full list of datasets.**
We measure the performance for each training dataset by training the [nreimers/MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model on it with [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss), a batch size of 256, for 2000 training steps. The performance is then averaged across 14 sentence embedding benchmark datasets from diverse domains (Reddit, Twitter, News, Publications, E-Mails, ...).
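The MultipleNegativesRankingLoss used in this evaluation treats, for each anchor in a batch, its paired text as the positive and every other positive in the batch as a negative. The following is a pure-Python sketch of that objective for illustration only; the actual sentence-transformers implementation operates on GPU tensors and is the authoritative reference:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives ranking loss: for each anchor i, positives[i]
    is the target class and every positives[j] (j != i) acts as a
    negative. Cross-entropy over the scaled similarity scores."""
    n = len(anchors)
    total = 0.0
    for i in range(n):
        scores = [scale * cosine(anchors[i], positives[j]) for j in range(n)]
        log_denom = math.log(sum(math.exp(s) for s in scores))
        total += log_denom - scores[i]
    return total / n
```

With a batch size of 256, each anchor is contrasted against 255 in-batch negatives, which is one reason larger batches tend to help this loss.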
| Dataset | Description | Size (#Lines) | Performance | Reference |
| --- | --- | :---: | :---: | --- |
| [gooaq_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/gooaq_pairs.jsonl.gz) | (Question, Answer)-Pairs from Google auto suggest | 3,012,496 | 59.06 | [GooAQ](https://github.com/allenai/gooaq)
| [yahoo_answers_title_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_answer.jsonl.gz) | (Title, Answer) pairs from Yahoo Answers | 1,198,260 | 58.65 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [msmarco-triplets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/msmarco-triplets.jsonl.gz) | (Question, Answer, Negative)-Triplets from MS MARCO Passages dataset | 499,184 | 58.76 | [MS MARCO Passages](https://github.com/microsoft/MSMARCO-Passage-Ranking)
| [stackexchange_duplicate_questions_title_title.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title_title.jsonl.gz) | (Title, Title) pairs of duplicate questions from StackExchange | 304,525 | 58.47 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [eli5_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/eli5_question_answer.jsonl.gz) | (Question, Answer)-Pairs from ELI5 dataset | 325,475 | 58.24 | [ELI5](https://huggingface.co/datasets/eli5)
| [yahoo_answers_title_question.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_title_question.jsonl.gz) | (Title, Question_Body) pairs from Yahoo Answers | 659,896 | 58.05 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [squad_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/squad_pairs.jsonl.gz) | (Question, Answer_Passage) Pairs from SQuAD dataset | 87,599 | 58.02 | [SQuAD](https://huggingface.co/datasets/squad)
| [yahoo_answers_question_answer.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/yahoo_answers_question_answer.jsonl.gz) | (Question_Body, Answer) pairs from Yahoo Answers | 681,164 | 57.74 | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset)
| [wikihow.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/wikihow.jsonl.gz) | (Summary, Text) from WikiHow | 128,542 | 57.67 | [WikiHow](https://github.com/pvl/wikihow_pairs_dataset)
| [amazon_review_2018.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon_review_2018.jsonl.gz) | (Title, review) pairs from Amazon | 87,877,725 | 57.65 | [Amazon review data (2018)](http://deepyeti.ucsd.edu/jianmo/amazon/index.html)
| [NQ-train_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/NQ-train_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the NQ dataset | 100,231 | 57.48 | [Natural Questions](https://ai.google.com/research/NaturalQuestions)
| [amazon-qa.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz) | (Question, Answer) pairs from Amazon | 1,095,290 | 57.48 | [AmazonQA](https://github.com/amazonqa/amazonqa)
| [S2ORC_title_abstract.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_title_abstract.jsonl.gz) | (Title, Abstract) pairs of scientific papers | 41,769,185 | 57.39 | [S2ORC](https://github.com/allenai/s2orc)
| [quora_duplicates.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates.jsonl.gz) | Duplicate question pairs from Quora | 103,663 | 57.36 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [WikiAnswers.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/WikiAnswers.jsonl.gz) | Sets of duplicates questions | 27,383,151 | 57.34 | [WikiAnswers Corpus](https://github.com/afader/oqa#wikianswers-corpus)
| [searchQA_top5_snippets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/searchQA_top5_snippets.jsonl.gz) | Question + Top5 text snippets from SearchQA dataset. Top5 | 117,220 | 57.34 | [search_qa](https://huggingface.co/datasets/search_qa)
| [stackexchange_duplicate_questions_title-body_title-body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_title-body_title-body.jsonl.gz) | (Title+Body, Title+Body) pairs of duplicate questions from StackExchange | 250,460 | 57.30 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [S2ORC_citations_titles.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_citations_titles.jsonl.gz) | Citation network (paper titles) | 51,030,086 | 57.28 | [S2ORC](https://github.com/allenai/s2orc)
| [stackexchange_duplicate_questions_body_body.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/stackexchange_duplicate_questions_body_body.jsonl.gz) | (Body, Body) pairs of duplicate questions from StackExchange | 250,519 | 57.26 | [Stack Exchange Data API](https://data.stackexchange.com/apple/query/fork/1456963)
| [agnews.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/agnews.jsonl.gz) | (Title, Description) pairs of news articles from the AG News dataset | 1,157,745 | 57.25 | [AG news corpus](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html)
| [quora_duplicates_triplets.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz) | Duplicate question pairs from Quora with additional hard negatives (mined & denoised by cross-encoder) | 101,762 | 56.97 | [QQP](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
| [AllNLI.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/AllNLI.jsonl.gz) | Combination of SNLI + MultiNLI Triplets: (Anchor, Entailment_Text, Contradiction_Text) | 277,230 | 56.57 | [SNLI](https://huggingface.co/datasets/snli) and [MNLI](https://huggingface.co/datasets/multi_nli)
| [npr.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/npr.jsonl.gz) | (Title, Body) pairs from the npr.org website | 594,384 | 56.44 | [Pushshift](https://files.pushshift.io/news/)
| [specter_train_triples.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz) | Triplets (Title, related_title, hard_negative) for Scientific Publications from Specter | 684,100 | 56.32 | [SPECTER](https://github.com/allenai/specter)
| [SimpleWiki.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 102,225 | 56.15 | [SimpleWiki](https://cs.pomona.edu/~dkauchak/simplification/)
| [PAQ_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz) | Training pairs (query, answer_passage) from the PAQ dataset | 64,371,441 | 56.11 | [PAQ](https://github.com/facebookresearch/PAQ)
| [altlex.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz) | Matched pairs (English_Wikipedia, Simple_English_Wikipedia) | 112,696 | 55.95 | [altlex](https://github.com/chridey/altlex/)
| [ccnews_title_text.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/ccnews_title_text.jsonl.gz) | (Title, article) pairs from the CC News dataset | 614,664 | 55.84 | [CC-News](https://huggingface.co/datasets/cc_news)
| [codesearchnet.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/codesearchnet.jsonl.gz) | CodeSearchNet corpus is a dataset of (comment, code) pairs from opensource libraries hosted on GitHub. It contains code and documentation for several programming languages. | 1,151,414 | 55.80 | [CodeSearchNet](https://huggingface.co/datasets/code_search_net)
| [S2ORC_citations_abstracts.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/S2ORC_citations_abstracts.jsonl.gz) | Citation network (paper abstracts) | 39,567,485 | 55.74 | [S2ORC](https://github.com/allenai/s2orc)
| [sentence-compression.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/sentence-compression.jsonl.gz) | Pairs (long_text, short_text) about sentence-compression | 180,000 | 55.63 | [Sentence-Compression](https://github.com/google-research-datasets/sentence-compression)
| [TriviaQA_pairs.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/TriviaQA_pairs.jsonl.gz) | Pairs (query, answer) from TriviaQA dataset | 73,346 | 55.56 | [TriviaQA](https://huggingface.co/datasets/trivia_qa)
| [cnn_dailymail_splitted.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/cnn_dailymail_splitted.jsonl.gz) | (article, highlight sentence) with individual highlight sentences for each news article | 311,971 | 55.36 | [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail)
| [cnn_dailymail.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/cnn_dailymail.jsonl.gz) | (highlight sentences, article) with all highlight sentences as one text for each news article | 311,971 | 55.27 | [CNN Dailymail Dataset](https://huggingface.co/datasets/cnn_dailymail)
| [flickr30k_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz) | Different captions for the same image from the Flickr30k dataset | 31,783 | 54.68 | [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/)
| [xsum.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/xsum.jsonl.gz) | (Summary, News Article) pairs from XSUM dataset | 226,711 | 53.86 | [xsum](https://huggingface.co/datasets/xsum)
| [coco_captions.jsonl.gz](https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/coco_captions.jsonl.gz) | Different captions for the same image | 82,783 | 53.77 | [COCO](https://cocodataset.org/)
**Disclaimer:** We only distribute these datasets in a specific format, but we do not vouch for their quality or fairness, or claim that you have a license to use the dataset. It remains your responsibility as a user to determine whether you have permission to use the dataset under its license and to cite the rightful owner of the dataset. Please check the individual dataset webpages for the license agreements.
If you're a dataset owner and wish to update any part of it, or do not want your dataset to be included in this dataset collection, feel free to contact me.
| 13,733 | [
[
-0.0286712646484375,
-0.06622314453125,
0.0220489501953125,
0.005062103271484375,
-0.006137847900390625,
-0.0070648193359375,
-0.02020263671875,
0.0008249282836914062,
0.0256195068359375,
0.020965576171875,
-0.03375244140625,
-0.052703857421875,
-0.0572204589843... |
UrukHan/t5-russian-spell_I | 2022-03-27T12:53:21.000Z | [
"region:us"
] | UrukHan | null | null | 0 | 17 | 2022-03-27T12:51:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
arka0821/multi_document_summarization | 2022-10-20T19:13:26.000Z | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2010.14235",
"region:us"
] | arka0821 | Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. | @article{lu2020multi,
title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Arka Das, India},
journal={arXiv preprint arXiv:2010.14235},
year={2022}
} | 4 | 17 | 2022-04-19T15:34:53 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- summarization-other-paper-abstract-generation
paperswithcode_id: multi-document
pretty_name: Multi-Document
---
# Dataset Card for Multi-Document
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Multi-Document repository](https://github.com/arka0821/multi_document_summarization)
- **Paper:** [Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
### Dataset Summary
Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
{"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. These antibodies instructs macrophages & T-Cells to attack & destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks \" Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. 
Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."}
### Data Fields
- `id`: unique id of the example
- `docs`: list of source documents; each document has
  - `id`: id of the document
  - `text`: text of the document
- `summary`: summary text
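As an illustrative sketch (the values below are invented stand-ins, not rows from the dataset), one example with these fields can be traversed like this:

```python
# Toy example following the field layout above (values are invented).
example = {
    "id": "n3ByHGrxH3bvfrvF",
    "docs": [
        {"id": "1", "text": "First source document."},
        {"id": "2", "text": "Second source document."},
    ],
    "summary": "A short multi-document summary.",
}

# Collect all source texts that a summarizer would condense into `summary`.
source_texts = [doc["text"] for doc in example["docs"]]
print(len(source_texts), "documents ->", example["summary"])
```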
### Data Splits
The data is split into training, validation and test sets.
| train | validation | test |
|------:|-----------:|-----:|
| 50 | 10 | 5 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{lu2020multi,
title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
author={Arka Das, India},
journal={arXiv preprint arXiv:2010.14235},
year={2022}
}
```
### Contributions
Thanks to [@arka0821] (https://github.com/arka0821/multi_document_summarization) for adding this dataset.
| 6,671 | [
[
-0.0335693359375,
-0.061431884765625,
-0.01004791259765625,
0.01001739501953125,
-0.0210113525390625,
0.0184326171875,
-0.007671356201171875,
-0.029815673828125,
0.0450439453125,
-0.01535797119140625,
-0.04290771484375,
-0.040557861328125,
-0.047210693359375,
... |
SetFit/amazon_massive_intent_en-US | 2022-05-06T09:08:00.000Z | [
"region:us"
] | SetFit | null | null | 2 | 17 | 2022-05-06T09:07:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
peandrew/conceptnet_en_nomalized | 2022-05-08T03:11:02.000Z | [
"region:us"
] | peandrew | null | null | 1 | 17 | 2022-05-08T01:47:33 | This is the English part of the ConceptNet and we have removed the useless information. | 88 | [
[
-0.016571044921875,
-0.0521240234375,
0.0128021240234375,
-0.0147705078125,
-0.041900634765625,
-0.019989013671875,
0.01561737060546875,
-0.032684326171875,
0.06549072265625,
0.058746337890625,
-0.05035400390625,
-0.0228729248046875,
-0.0084228515625,
-0.013... |
tonytins/chat-dataset | 2022-06-10T03:36:25.000Z | [
"region:us"
] | tonytins | null | null | 1 | 17 | 2022-06-08T13:12:08 | # Chat Dataset
Derived from Hitomi Team's [Convo Dataset](https://github.com/hitomi-team/convo-dataset) on Github, the Chat Dataset is a large, diverse conversational dataset used to train models for conversation analysis and generation.
## Getting Started
### Prerequisites
- Python
- Git LFS
## DISCLAIMER
**In order to efficiently process the data, this repository contains language that may be offensive! View at your own risk!**
## License
This project is licensed under GNU Public License version 2.0. See [LICENSE](LICENSE) for details.
| 558 | [
[
-0.0133056640625,
-0.061126708984375,
-0.001995086669921875,
0.002658843994140625,
-0.00937652587890625,
0.01021575927734375,
-0.0217437744140625,
-0.0188751220703125,
0.01776123046875,
0.043701171875,
-0.0673828125,
-0.036376953125,
-0.0231475830078125,
-0.... |
phihung/titanic | 2022-06-22T16:25:32.000Z | [
"license:other",
"region:us"
] | phihung | null | null | 1 | 17 | 2022-06-22T16:16:15 | ---
license: other
---
The legendary Titanic dataset from [this](https://www.kaggle.com/competitions/titanic/overview) Kaggle competition | 137 | [
[
-0.0183258056640625,
-0.03179931640625,
0.01242828369140625,
0.00690460205078125,
-0.02105712890625,
0.0159454345703125,
0.04254150390625,
-0.005214691162109375,
0.041534423828125,
0.0704345703125,
-0.0445556640625,
-0.02117919921875,
-0.0110321044921875,
-0... |
Paul/hatecheck-spanish | 2022-07-05T10:27:07.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"arxiv:2206.09917",
"regi... | Paul | null | null | 4 | 17 | 2022-07-05T10:06:37 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Spanish HateCheck
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate.
This allows for targeted diagnostic insights into model performance.
For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work!
- **Paper:** Röttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917
- **Repository:** https://github.com/rewire-online/multilingual-hatecheck
- **Point of Contact:** paul@rewire.online
## Dataset Structure
The csv format mostly matches the original HateCheck data, with some adjustments for specific languages.
**mhc_case_id**
The test case ID that is unique to each test case across languages (e.g., "mandarin-1305")
**functionality**
The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations.
**test_case**
The test case text.
**label_gold**
The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label.
**target_ident**
Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages.
**ref_case_id**
For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case.
**ref_templ_id**
The equivalent to ref_case_id, but for template IDs.
**templ_id**
The ID of the template from which the test case was generated.
**case_templ**
The template from which the test case was generated (where applicable).
**gender_male** and **gender_female**
For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ.
**label_annotated**
A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']").
**label_annotated_maj**
The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts.
**disagreement_in_case**
True if label_annotated_maj does not match label_gold for the entry.
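As an illustrative sketch (not code from the MHC repository), the relation between `label_annotated`, `label_annotated_maj` and `disagreement_in_case` can be reproduced like this:

```python
from collections import Counter

# Toy row with the annotation fields described above (values invented).
row = {
    "label_gold": "hateful",
    "label_annotated": ["hateful", "non-hateful", "hateful"],
}

# Majority vote of the three annotators -> label_annotated_maj.
label_annotated_maj = Counter(row["label_annotated"]).most_common(1)[0][0]

# disagreement_in_case flags rows where the majority differs from gold.
disagreement_in_case = label_annotated_maj != row["label_gold"]
print(label_annotated_maj, disagreement_in_case)
```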
**disagreement_in_template**
True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC. | 3,490 | [
[
-0.046630859375,
-0.052032470703125,
-0.0040130615234375,
0.00669097900390625,
-0.00841522216796875,
0.00782012939453125,
-0.002208709716796875,
-0.037078857421875,
0.029052734375,
0.0238037109375,
-0.055145263671875,
-0.056121826171875,
-0.0408935546875,
0.... |
pyronear/openfire | 2022-12-11T22:25:43.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:apache-2.0",
"region:us"
] | pyronear | OpenFire is an image classification dataset for wildfire detection, collected
from web searches. | @software{Pyronear_PyroVision_2019,
title={Pyrovision: wildfire early detection},
author={Pyronear contributors},
year={2019},
month={October},
publisher = {GitHub},
url = {https://github.com/pyronear/pyro-vision}
} | 2 | 17 | 2022-07-17T16:11:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language: []
license:
- apache-2.0
multilinguality: []
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
pretty_name: Wildfire image classification dataset collected using images from web
searches.
---
# Dataset Card for OpenFire
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://pyronear.org/pyro-vision/datasets.html#openfire
- **Repository:** https://github.com/pyronear/pyro-vision
- **Point of Contact:** Pyronear <https://pyronear.org/en/>
### Dataset Summary
OpenFire is an image classification dataset for wildfire detection, collected
from web searches.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image URL and its binary label.
```
{
'image_url': 'https://cdn-s-www.ledauphine.com/images/13C08274-6BA6-4577-B3A0-1E6C1B2A573C/FB1200/photo-1338240831.jpg',
'is_wildfire': true,
}
```
### Data Fields
- `image_url`: the download URL of the image.
- `is_wildfire`: a boolean value specifying whether there is an ongoing wildfire on the image.
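As a small sketch of working with records of this shape (the rows below are invented, not actual dataset entries), data points can be filtered by their binary label:

```python
# Invented records following the field layout above.
records = [
    {"image_url": "https://example.com/fire.jpg", "is_wildfire": True},
    {"image_url": "https://example.com/forest.jpg", "is_wildfire": False},
    {"image_url": "https://example.com/smoke.jpg", "is_wildfire": True},
]

# Split URLs by the binary label, e.g. to inspect class balance.
wildfire_urls = [r["image_url"] for r in records if r["is_wildfire"]]
positive_ratio = len(wildfire_urls) / len(records)
print(f"{len(wildfire_urls)} positives ({positive_ratio:.0%})")
```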
### Data Splits
The data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images.
## Dataset Creation
### Curation Rationale
The curators state that current wildfire classification datasets typically contain close-up shots of wildfires, with limited variation in weather conditions, luminosity and backgrounds, making it difficult to assess real-world performance. They argue that these dataset limitations have partially contributed to the failure of some algorithms to cope with sun flares, foggy/cloudy weather conditions and small fire scales.
### Source Data
#### Initial Data Collection and Normalization
OpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors.
### Annotations
#### Annotation process
Each web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors.
#### Who are the annotators?
François-Guillaume Fernandez
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
François-Guillaume Fernandez
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Pyronear_PyroVision_2019,
title={Pyrovision: wildfire early detection},
author={Pyronear contributors},
year={2019},
month={October},
publisher = {GitHub},
howpublished = {\url{https://github.com/pyronear/pyro-vision}}
}
```
| 4,218 | [
[
-0.010894775390625,
-0.0175323486328125,
-0.007534027099609375,
0.0272674560546875,
-0.01003265380859375,
-0.02001953125,
-0.0256195068359375,
-0.0178375244140625,
0.0006856918334960938,
0.030303955078125,
-0.05023193359375,
-0.0657958984375,
-0.0287933349609375... |
naver-clova-ix/synthdog-zh | 2022-07-22T06:43:28.000Z | [
"region:us"
] | naver-clova-ix | null | null | 3 | 17 | 2022-07-20T00:42:55 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
luigisaetta/atco2 | 2022-08-29T07:36:28.000Z | [
"region:us"
] | luigisaetta | null | null | 2 | 17 | 2022-08-07T13:27:14 | This dataset contains ATC communication.
It can be used to fine-tune an **ASR** model specialised for Air Traffic Control Communications (ATC).
Its data have been taken from the [ATCO2 site](https://www.atco2.org/data) | 220 | [
[
-0.030364990234375,
-0.0272216796875,
-0.0013761520385742188,
-0.0016298294067382812,
-0.022552490234375,
0.0169525146484375,
0.022491455078125,
-0.040985107421875,
0.007648468017578125,
0.06536865234375,
-0.04296875,
-0.01873779296875,
-0.01096343994140625,
... |
jonathanli/echr | 2022-08-21T23:29:28.000Z | [
"license:cc-by-nc-sa-4.0",
"arxiv:1906.02059",
"region:us"
] | jonathanli | The ECHR Cases dataset is designed for experimentation of neural judgment prediction, as in the original 2019 ACL paper "Neural Legal Judgment Prediction in English". | @inproceedings{chalkidis-etal-2019-neural,
title = "Neural Legal Judgment Prediction in {E}nglish",
author = "Chalkidis, Ilias and
Androutsopoulos, Ion and
Aletras, Nikolaos",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1424",
doi = "10.18653/v1/P19-1424",
pages = "4317--4323",
} | 0 | 17 | 2022-08-15T01:35:16 | ---
license: cc-by-nc-sa-4.0
---
# ECHR Cases
The original data from [Chalkidis et al.](https://arxiv.org/abs/1906.02059), sourced from [archive.org](https://archive.org/details/ECHR-ACL2019).
## Preprocessing
* Order is shuffled
* Fact numbers preceding each fact are removed (using the Python regex `^[0-9]+\. `), as some cases didn't have fact numbers to begin with
* Everything else is the same
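A minimal sketch of that numbering-removal step (the actual preprocessing script is not shown here; only the regex is quoted above):

```python
import re

# Pattern quoted above: a leading fact number such as "12. ".
FACT_NUMBER = re.compile(r"^[0-9]+\. ")

def strip_fact_number(fact: str) -> str:
    """Drop a leading fact number if present; leave other facts untouched."""
    return FACT_NUMBER.sub("", fact, count=1)

facts = [
    "7. The applicant was born in 1952.",
    "The applicant complained under Article 6.",  # no number to begin with
]
cleaned = [strip_fact_number(f) for f in facts]
```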
| 404 | [
[
-0.03326416015625,
-0.0665283203125,
0.06170654296875,
-0.0202484130859375,
-0.044830322265625,
-0.01265716552734375,
0.0160675048828125,
-0.02679443359375,
0.035125732421875,
0.05059814453125,
-0.04010009765625,
-0.034149169921875,
-0.0340576171875,
0.02209... |
thepurpleowl/codequeries | 2023-06-03T12:50:46.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:code",
"license:apache-2.0",
"neural modeling of code",
"code ques... | thepurpleowl | CodeQueries Ideal setup. | @article{codequeries2022,
title={Learning to Answer Semantic Queries over Code},
author={A, B, C, D, E, F},
journal={arXiv preprint arXiv:<.>},
year={2022}
} | 4 | 17 | 2022-08-24T09:27:43 | ---
annotations_creators:
- expert-generated
language:
- code
language_creators:
- found
multilinguality:
- monolingual
pretty_name: codequeries
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- neural modeling of code
- code question answering
- code semantic understanding
task_categories:
- question-answering
task_ids:
- extractive-qa
license:
- apache-2.0
---
# Dataset Card for CodeQueries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [How to use](#how-to-use)
- [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Data](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code](https://github.com/thepurpleowl/codequeries-benchmark)
- **Paper:**
### Dataset Summary
CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify answer and supporting-fact spans in the code for the query. This is extractive question-answering over code, for questions with a large scope (entire files) and complexity including both single- and multi-hop reasoning.
### Supported Tasks and Leaderboards
Extractive question answering for code, semantic understanding of code.
### Languages
The dataset contains code context from `python` files.
## Dataset Structure
### How to Use
The dataset can be directly used with the huggingface datasets package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
```python
import datasets
# in addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))
#OUTPUT:
{'query_name': 'Unused import',
'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
'metadata': 'root',
'header': "['module', '___EOS___']",
'index': 0},
'answer_spans': [{'span': 'from glance.common import context',
'start_line': 19,
'start_column': 0,
'end_line': 19,
'end_column': 33}
],
'supporting_fact_spans': [],
'example_type': 1,
'single_hop': False,
'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
'relevance_label': 1
}
```
### Data Splits and Data Fields
Detailed information on the data splits for proposed settings can be found in the paper.
In general, data splits in all the proposed settings have examples with the following fields -
```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field and `twostep` has `context_block`]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (1(positive)) or 0(negative)) example type)
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block) [only `twostep` setting has this field]
```
## Dataset Creation
The dataset is created using [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) as source for code contexts. To get semantic queries and corresponding answer/supporting-fact spans in ETH Py150 Open corpus files, CodeQL was used.
## Additional Information
### Licensing Information
The source code repositories used for preparing CodeQueries are based on the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) and are redistributable under the respective licenses. A Huggingface dataset for ETH Py150 Open is available [here](https://huggingface.co/datasets/eth_py150_open). The labeling prepared and provided by us as part of CodeQueries is released under the Apache-2.0 license.
| 4,966 | [
[
-0.041046142578125,
-0.06304931640625,
0.01479339599609375,
0.025482177734375,
-0.01255035400390625,
0.000274658203125,
-0.007358551025390625,
-0.01358795166015625,
0.048187255859375,
0.03790283203125,
-0.047943115234375,
-0.058135986328125,
-0.0233001708984375,... |
jamescalam/unsplash-image-text | 2022-09-06T22:37:14.000Z | [
"region:us"
] | jamescalam | This is a dataset that streams photos data from the Unsplash 25K servers. | @InProceedings{huggingface:dataset,
title = {Unsplash Lite Dataset Images},
author={Unsplash},
year={2022}
} | 1 | 17 | 2022-09-02T18:18:10 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
generalization/conv_intent_Full-p_1 | 2022-09-09T05:16:34.000Z | [
"region:us"
] | generalization | null | null | 0 | 17 | 2022-09-08T18:00:56 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
open-source-metrics/pip | 2023-10-26T12:04:09.000Z | [
"region:us"
] | open-source-metrics | null | null | 0 | 17 | 2022-09-27T18:19:45 | ---
dataset_info:
features:
- name: day
dtype: string
- name: num_downloads
dtype: int64
splits:
- name: datasets
num_bytes: 21912
num_examples: 996
- name: transformers
num_bytes: 25960
num_examples: 1180
- name: pytorch_image_models
num_bytes: 25278
num_examples: 1149
- name: huggingface_hub
num_bytes: 22792
num_examples: 1036
- name: safetensors
num_bytes: 7348
num_examples: 334
- name: peft
num_bytes: 6138
num_examples: 279
- name: diffusers
num_bytes: 11286
num_examples: 513
- name: tokenizers
num_bytes: 25278
num_examples: 1149
- name: gradio
num_bytes: 25278
num_examples: 1149
- name: optimum
num_bytes: 16896
num_examples: 768
- name: accelerate
num_bytes: 21912
num_examples: 996
- name: evaluate
num_bytes: 13882
num_examples: 631
download_size: 131596
dataset_size: 223960
configs:
- config_name: default
data_files:
- split: accelerate
path: data/accelerate-*
- split: datasets
path: data/datasets-*
- split: diffusers
path: data/diffusers-*
- split: evaluate
path: data/evaluate-*
- split: gradio
path: data/gradio-*
- split: huggingface_hub
path: data/huggingface_hub-*
- split: optimum
path: data/optimum-*
- split: peft
path: data/peft-*
- split: pytorch_image_models
path: data/pytorch_image_models-*
- split: safetensors
path: data/safetensors-*
- split: tokenizers
path: data/tokenizers-*
- split: transformers
path: data/transformers-*
---
# Dataset Card for "pip"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,738 | [
[
-0.0352783203125,
0.00290679931640625,
-0.0028934478759765625,
0.02508544921875,
-0.01556396484375,
-0.01215362548828125,
0.04248046875,
0.0006451606750488281,
0.057647705078125,
0.033935546875,
-0.055328369140625,
-0.041717529296875,
-0.049591064453125,
-0.... |
maderix/flickr_bw_rgb | 2022-10-12T15:34:25.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:N/A",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | maderix | null | null | 5 | 17 | 2022-10-12T15:09:17 | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'flickr_bw_rgb'
size_categories:
- n<1K
source_datasets:
- N/A
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Flickr_bw_rgb
An image-caption dataset which stores a group of black-and-white and color images with corresponding captions describing the content of each image, with a 'colorized photograph of' or 'Black and white photograph of' prefix.
This dataset can then be used for fine-tuning text-to-image models. Only a train split is provided.
## Examples
"train/<filename>.jpg": the images in JPEG format
"train/metadata.jsonl": the per-image metadata fields
Dataset columns:
- "file_name"
- "caption"
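As a minimal sketch of the layout (the file names and captions below are illustrative, not taken from the dataset), each line of `train/metadata.jsonl` is one JSON record with the two documented columns:

```python
import json

# Illustrative metadata.jsonl content; real file names and captions differ.
metadata_jsonl = "\n".join([
    json.dumps({"file_name": "0001.jpg",
                "caption": "colorized photograph of a street market"}),
    json.dumps({"file_name": "0002.jpg",
                "caption": "Black and white photograph of a bridge"}),
])

# Parse one record per line, checking the documented schema.
records = [json.loads(line) for line in metadata_jsonl.splitlines()]
assert all(set(r) == {"file_name", "caption"} for r in records)
```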
## Citation
If you use this dataset, please cite it as:
```
@misc{maderix2022flickrbwrgb,
author = {maderix: maderix@gmail.com},
title = {flickr_bw_rgb},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/maderix/flickr_bw_rgb/}}
}
``` | 1,110 | [
[
-0.038238525390625,
-0.01070404052734375,
-0.007076263427734375,
0.0289764404296875,
-0.0517578125,
0.002040863037109375,
0.0126495361328125,
-0.00643157958984375,
0.01727294921875,
0.0307159423828125,
-0.054534912109375,
-0.0274810791015625,
-0.03582763671875,
... |
Gazoche/gundam-captioned | 2022-10-15T01:44:59.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<2K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | Gazoche | null | null | 4 | 17 | 2022-10-13T11:51:15 | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
pretty_name: 'Gundam captioned'
size_categories:
- n<2K
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for captioned Gundam
Scraped from mahq.net (https://www.mahq.net/mecha/gundam/index.htm) and manually cleaned to only keep drawings and "Mobile Suits" (i.e., humanoid-looking machines).
The captions were automatically generated from a generic hardcoded description + the dominant colors as described by [BLIP](https://github.com/salesforce/BLIP). | 622 | [
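A minimal sketch of how such a caption could be assembled; the template string and the color list are assumptions for illustration, not the exact strings used in the dataset:

```python
# Hypothetical caption template: a fixed, generic description plus the
# dominant colors (in the dataset these came from BLIP; here they are made up).
def build_caption(colors):
    base = "a drawing of a Gundam mobile suit"
    if not colors:
        return base
    return f"{base}, {' and '.join(colors)}"

caption = build_caption(["white", "blue", "red"])
```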
[
-0.0286865234375,
-0.01267242431640625,
0.00196075439453125,
0.01560211181640625,
-0.03814697265625,
0.006954193115234375,
0.035888671875,
-0.01012420654296875,
0.029998779296875,
0.05255126953125,
-0.070068359375,
-0.03704833984375,
-0.0091705322265625,
0.0... |
jakartaresearch/causalqa | 2022-11-25T12:26:42.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"question-answering",
"english",
"causal",
"region:us"
] | jakartaresearch | null | null | 0 | 17 | 2022-11-25T10:23:48 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: CausalQA
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- question-answering
- english
- causal
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@alamhanz](https://github.com/alamhanz) and [@andreaschandra](https://github.com/andreaschandra) for adding this dataset.
| 2,836 | [
[
-0.03289794921875,
-0.034912109375,
0.0101470947265625,
0.0207061767578125,
-0.01535797119140625,
0.01514434814453125,
-0.0236358642578125,
-0.0283050537109375,
0.044586181640625,
0.04443359375,
-0.06256103515625,
-0.08404541015625,
-0.05230712890625,
0.0058... |
argilla/uber-reviews | 2022-12-06T12:00:28.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | argilla | null | null | 0 | 17 | 2022-12-06T11:47:18 | ---
language:
- en
license:
- unknown
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
dataset_info:
features:
- name: text
dtype: string
- name: inputs
struct:
- name: text
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: string
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
dtype: 'null'
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 2761597
num_examples: 2347
download_size: 1691346
dataset_size: 2761597
---
# Dataset Card for "uber-reviews"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Using Python's Beautiful Soup library and the Scrapy framework, the date, star rating, and comment were scraped from all reviews from 2013 to 2019.
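Each record follows the Argilla schema shown in `dataset_info` above (`text`, a `prediction` list of `label`/`score` pairs, an empty `annotation`). A minimal sketch, with an invented review and invented scores, of reading the top predicted label:

```python
# Invented record following the card's schema; text and scores are illustrative.
record = {
    "text": "Driver was friendly and the ride was quick.",
    "prediction": [
        {"label": "POSITIVE", "score": 0.97},
        {"label": "NEGATIVE", "score": 0.03},
    ],
    "annotation": None,
    "status": "Default",
}

# The predicted sentiment is the highest-scoring entry in the prediction list.
top = max(record["prediction"], key=lambda p: p["score"])
```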
### Languages
english
### Citation Information
https://www.kaggle.com/datasets/jschne61701/uber-rides-costumer-reviews-dataset
https://www.sitejabber.com/reviews/uber.com
https://www.consumeraffairs.com/travel/uber.html
https://www.kaggle.com/purvank/uber-rider-reviews-dataset
### Contributions
Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset.
| 1,782 | [
[
-0.027130126953125,
-0.0107574462890625,
0.019561767578125,
0.0389404296875,
-0.037261962890625,
-0.0078277587890625,
0.01153564453125,
-0.04046630859375,
0.040008544921875,
0.0290374755859375,
-0.04901123046875,
-0.0584716796875,
-0.0111236572265625,
0.0027... |
lewtun/corgi | 2022-12-19T08:45:20.000Z | [
"region:us"
] | lewtun | null | null | 2 | 17 | 2022-12-19T08:44:51 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 5590698.0
num_examples: 5
download_size: 5591635
dataset_size: 5590698.0
---
# Dataset Card for "corgi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 348 | [
[
-0.05267333984375,
-0.01953125,
-0.0104827880859375,
0.0243988037109375,
-0.0119171142578125,
-0.002086639404296875,
0.0198822021484375,
-0.025909423828125,
0.059783935546875,
0.0143585205078125,
-0.06646728515625,
-0.037841796875,
-0.03240966796875,
-0.0138... |
orai-nlp/basqueGLUE | 2022-12-21T09:54:32.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:intent-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:coreference-resolution",
"annot... | orai-nlp | We present BasqueGLUE, the first NLU benchmark for Basque, which has been elaborated from
previously existing datasets and following similar criteria to those used for the construction of
GLUE and SuperGLUE. BasqueGLUE is freely available under an open license. | @InProceedings{urbizu2022basqueglue,
author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1603--1612},
abstract = {Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.},
url = {https://aclanthology.org/2022.lrec-1.172}
} | 1 | 17 | 2022-12-20T14:28:19 | ---
annotations_creators:
- expert-generated
language:
- eu
language_creators:
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: BasqueGLUE
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-classification
- token-classification
task_ids:
- intent-classification
- natural-language-inference
- sentiment-classification
- topic-classification
- named-entity-recognition
- coreference-resolution
configs:
- bec
- bhtc
- coref
- intent
- nerc_id
- nerc_od
- qnli
- slot
- vaxx
- wic
dataset_info:
- config_name: bec
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': N
'1': NEU
'2': P
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 693284
num_examples: 6078
- name: test
num_bytes: 148510
num_examples: 1302
- name: validation
num_bytes: 148377
num_examples: 1302
download_size: 1217803
dataset_size: 990171
- config_name: bhtc
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Ekonomia
'1': Euskal Herria
'2': Euskara
'3': Gizartea
'4': Historia
'5': Ingurumena
'6': Iritzia
'7': Komunikazioa
'8': Kultura
'9': Nazioartea
'10': Politika
'11': Zientzia
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 2431494
num_examples: 8585
- name: test
num_bytes: 523066
num_examples: 1854
- name: validation
num_bytes: 519555
num_examples: 1857
download_size: 3896312
dataset_size: 3474115
- config_name: coref
features:
- name: text
dtype: string
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 365830
num_examples: 986
- name: test
num_bytes: 201378
num_examples: 587
- name: validation
num_bytes: 108632
num_examples: 320
download_size: 855074
dataset_size: 675840
- config_name: intent
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': alarm/cancel_alarm
'1': alarm/modify_alarm
'2': alarm/set_alarm
'3': alarm/show_alarms
'4': alarm/snooze_alarm
'5': alarm/time_left_on_alarm
'6': reminder/cancel_reminder
'7': reminder/set_reminder
'8': reminder/show_reminders
'9': weather/checkSunrise
'10': weather/checkSunset
'11': weather/find
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 182856
num_examples: 3418
- name: test
num_bytes: 56118
num_examples: 1087
- name: validation
num_bytes: 101644
num_examples: 1904
download_size: 595375
dataset_size: 340618
- config_name: nerc_id
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-ORG
'6': I-ORG
'7': B-MISC
'8': I-MISC
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 946007
num_examples: 2842
- name: test
num_bytes: 653960
num_examples: 1846
- name: validation
num_bytes: 237464
num_examples: 711
download_size: 1723325
dataset_size: 1837431
- config_name: nerc_od
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-ORG
'6': I-ORG
'7': B-MISC
'8': I-MISC
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 1183471
num_examples: 3553
- name: test
num_bytes: 262853
num_examples: 598
- name: validation
num_bytes: 270028
num_examples: 601
download_size: 1613369
dataset_size: 1716352
- config_name: qnli
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 327189
num_examples: 1764
- name: test
num_bytes: 42569
num_examples: 238
- name: validation
num_bytes: 46359
num_examples: 230
download_size: 532399
dataset_size: 416117
- config_name: slot
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-datetime
'2': B-location
'3': B-negation
'4': B-alarm/alarm_modifier
'5': B-alarm/recurring_period
'6': B-reminder/noun
'7': B-reminder/todo
'8': B-reminder/reference
'9': B-reminder/recurring_period
'10': B-weather/attribute
'11': B-weather/noun
'12': I-datetime
'13': I-location
'14': I-negation
'15': I-alarm/alarm_modifier
'16': I-alarm/recurring_period
'17': I-reminder/noun
'18': I-reminder/todo
'19': I-reminder/reference
'20': I-reminder/recurring_period
'21': I-weather/attribute
'22': I-weather/noun
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 388774
num_examples: 3418
- name: test
num_bytes: 114876
num_examples: 1088
- name: validation
num_bytes: 214053
num_examples: 1900
download_size: 962250
dataset_size: 717703
- config_name: vaxx
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': AGAINST
'1': NONE
'2': FAVOR
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 176436
num_examples: 864
- name: test
num_bytes: 70947
num_examples: 312
- name: validation
num_bytes: 42795
num_examples: 206
download_size: 333997
dataset_size: 290178
- config_name: wic
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: word
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: start1
dtype: int32
- name: start2
dtype: int32
- name: end1
dtype: int32
- name: end2
dtype: int32
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 172847108
num_examples: 408559
- name: test
num_bytes: 589578
num_examples: 1400
- name: validation
num_bytes: 251549
num_examples: 600
download_size: 22938354
dataset_size: 173688235
---
# Dataset Card for BasqueGLUE
## Table of Contents
* [Table of Contents](#table-of-contents)
* [Dataset Description](#dataset-description)
* [Dataset Summary](#dataset-summary)
* [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
* [Languages](#languages)
* [Dataset Structure](#dataset-structure)
* [Data Instances](#data-instances)
* [Data Fields](#data-fields)
* [Data Splits](#data-splits)
* [Dataset Creation](#dataset-creation)
* [Curation Rationale](#curation-rationale)
* [Source Data](#source-data)
* [Annotations](#annotations)
* [Personal and Sensitive Information](#personal-and-sensitive-information)
* [Considerations for Using the Data](#considerations-for-using-the-data)
* [Social Impact of Dataset](#social-impact-of-dataset)
* [Discussion of Biases](#discussion-of-biases)
* [Other Known Limitations](#other-known-limitations)
* [Additional Information](#additional-information)
* [Dataset Curators](#dataset-curators)
* [Licensing Information](#licensing-information)
* [Citation Information](#citation-information)
* [Contributions](#contributions)
## Dataset Description
* **Repository:** <https://github.com/orai-nlp/BasqueGLUE>
* **Paper:** [BasqueGLUE: A Natural Language Understanding Benchmark for Basque](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.172.pdf)
* **Point of Contact:** [Contact Information](https://github.com/orai-nlp/BasqueGLUE#contact-information)
### Dataset Summary
Natural Language Understanding (NLU) technology has improved significantly over the last few years, and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages.
We present BasqueGLUE, the first NLU benchmark for Basque, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. BasqueGLUE is freely available under an open license.
| Dataset | \|Train\| | \|Val\| | \|Test\| | Task | Metric | Domain |
|----------------|----------:|--------:|---------:|------------------------|:------:|-----------------|
| NERCid | 51,539 | 12,936 | 35,855 | NERC | F1 | News |
| NERCood | 64,475 | 14,945 | 14,462 | NERC | F1 | News, Wikipedia |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 | Intent classification | F1 | Dialog system |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 | Slot filling | F1 | Dialog system |
| BHTCv2 | 8,585 | 1,857 | 1,854 | Topic classification | F1 | News |
| BEC2016eu | 6,078 | 1,302 | 1,302 | Sentiment analysis | F1 | Twitter |
| VaxxStance | 864 | 206 | 312 | Stance detection | MF1* | Twitter |
| QNLIeu | 1,764 | 230 | 238 | QA/NLI | Acc | Wikipedia |
| WiCeu | 408,559 | 600 | 1,400 | WSD | Acc | Wordnet |
| EpecKorrefBin | 986 | 320 | 587 | Coreference resolution | Acc | News |
### Supported Tasks and Leaderboards
This benchmark comprises the following tasks:
#### NERCid
This dataset contains sentences from the news domain with manually annotated named entities. The data is the merge of EIEC (a dataset of a collection of news wire articles from Euskaldunon Egunkaria newspaper, (Alegria et al. 2004)), and newly annotated data from naiz.eus. The data is annotated following the BIO annotation scheme over four categories: person, organization, location, and miscellaneous.
#### NERCood
This dataset contains sentences with manually annotated named entities. The training data is the merge of EIEC (a dataset of a collection of news wire articles from Euskaldunon Egunkaria newspaper, (Alegria et al. 2004)), and newly annotated data from naiz.eus. The data is annotated following the BIO annotation scheme over four categories: person, organization, location, and miscellaneous. For validation and test sets, sentences from Wikipedia were annotated following the same annotation guidelines.
#### FMTODeu_intent
This dataset contains utterance texts and intent annotations drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The examples are annotated with one of 12 different intent classes corresponding to alarm, reminder or weather related actions.
#### FMTODeu_slot
This dataset contains utterance texts and sequence intent argument annotations designed for slot filling tasks, drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The task is a sequence labelling task similar to NERC, following BIO annotation scheme over 11 categories.
#### BHTCv2
The corpus contains 12,296 news headlines (brief article descriptions) from the Basque weekly newspaper [Argia](https://www.argia.eus). Topics are classified uniquely according to twelve thematic categories.
#### BEC2016eu
The Basque Election Campaign 2016 Opinion Dataset (BEC2016eu) is a new dataset for the task of sentiment analysis, a sequence classification task, which contains tweets about the campaign for the Basque elections from 2016. The crawling was carried out during the election campaign period (2016/09/09-2016/09/23), by monitoring the main parties and their respective candidates. The tweets were manually annotated as positive, negative or neutral.
#### VaxxStance
The VaxxStance (Agerri et al., 2021) dataset originally provides texts and stance annotations for social media texts around the anti-vaccine movement. Each text is given a label indicating whether it expresses an AGAINST, FAVOR or NONE stance towards the topic.
#### QNLIeu
This task includes the QA dataset ElkarHizketak (Otegi et al. 2020), a low-resource conversational Question Answering (QA) dataset for Basque created by native speaker volunteers. The dataset is built on top of Wikipedia sections about popular people and organizations, and it contains around 400 dialogues and 1600 question and answer pairs. The task was adapted into a sentence-pair binary classification task, following the design of QNLI for English (Wang et al. 2019). Each question and answer pair is given a label indicating whether the answer is entailed by the question.
#### WiCeu
Word in Context or WiC (Pilehvar and Camacho-Collados 2019) is a word sense disambiguation (WSD) task, designed as a particular form of sentence pair binary classification. Given two text snippets and a polysemous word that appears in both of them (the span of the word is marked in both snippets), the task is to determine whether the word has the same sense in both sentences. This dataset is based on the EPEC-EuSemcor (Pociello et al. 2011) sense-tagged corpus.
#### EpecKorrefBin
EPEC-KORREF-Bin is a dataset derived from EPEC-KORREF (Soraluze et al. 2012), a corpus of Basque news documents with manually annotated mentions and coreference chains, which has been converted into a binary classification task. In this task, the model has to predict whether two mentions from a text, which can be pronouns, nouns or noun phrases, refer to the same entity.
#### Leaderboard
Results obtained for two BERT base models as a baseline for the Benchmark.
| | AVG | NERC | F_intent | F_slot | BHTC | BEC | Vaxx | QNLI | WiC | coref |
|------------------------------------------------------------|:-----:|:-----:|:---------:|:-------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| Model | | F1 | F1 | F1 | F1 | F1 | MF1 | acc | acc | acc |
|[BERTeus](https://huggingface.co/ixa-ehu/berteus-base-cased)| 73.23 | 81.92 | 82.52 | 74.34 | 78.26 | 69.43 | 59.30 | 74.26 | 70.71 | 68.31 |
|[ElhBERTeu](https://huggingface.co/elh-eus/ElhBERTeu) | 73.71 | 82.30 | 82.24 | 75.64 | 78.05 | 69.89 | 63.81 | 73.84 | 71.71 | 65.93 |
The results obtained on NERC are the average of in-domain and out-of-domain NERC.
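The AVG column is the plain mean of the nine per-task scores; a quick check against the BERTeus row above:

```python
# Per-task scores for BERTeus from the leaderboard table
# (the NERC value is already the in-domain/out-of-domain average).
berteus = [81.92, 82.52, 74.34, 78.26, 69.43, 59.30, 74.26, 70.71, 68.31]
avg = round(sum(berteus) / len(berteus), 2)
assert avg == 73.23  # matches the reported AVG
```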
### Languages
Data are available in Basque (BCP-47 `eu`)
## Dataset Structure
### Data Instances
#### NERCid/NERCood
An example of 'train' looks as follows:
```
{
"idx": 0,
"tags": ["O", "O", "O", "O", "B-ORG", "O", ...],
"tokens": ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako", ...]
}
```
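The NERC (and slot-filling) configs label tokens with BIO tags. A minimal decoder, run on a shortened version of the example above (the elided tags are assumed to be `O`):

```python
def bio_spans(tokens, tags):
    """Collect (entity_type, text) spans from BIO-tagged tokens."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        spans.append((ctype, " ".join(current)))
    return spans

# Truncated example from above; the elided tags are assumed to be "O".
tokens = ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako"]
tags = ["O", "O", "O", "O", "B-ORG", "O"]
spans = bio_spans(tokens, tags)
```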
#### FMTODeu_intent
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "alarm/modify_alarm",
"text": "aldatu alarma 7am-tik 7pm-ra , mesedez"
}
```
#### FMTODeu_slot
An example of 'train' looks as follows:
```
{
"idx": 923,
"tags": ["O", "B-reminder/todo", "I-datetime", "I-datetime", "B-reminder/todo"],
"tokens": ["gogoratu", "zaborra", "gaur", "gauean", "ateratzea"]
}
```
#### BHTCv2
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "Gizartea",
"text": "Genero berdintasunaz, hezkuntzaz eta klase gizarteaz hamar liburu baino gehiago..."
}
```
#### BEC2016eu
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "NEU",
"text": '"Emandako hitza bete egingo dut" Urkullu\nBa galdeketa enegarrenez daramazue programan (ta zuen AHTa...)\n#I25debatea #URL"'
}
```
#### VaxxStance
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "FAVOR",
"text": "\"#COVID19 Oraingo datuak, izurriaren dinamika, txertoaren eragina eta birusaren..
}
```
#### QNLIeu
An example of 'train' looks as follows:
```
{
"idx": 1,
"label": "not_entailment",
"question": "Zein posiziotan jokatzen du Busquets-ek?",
"sentence": "Busquets 23 partidatan izan zen konbokatua eta 2 gol sartu zituen."
}
```
#### WiCeu
An example of 'test' looks as follows:
```
{
"idx": 16,
"label": false,
"word": "udal",
"sentence1": "1a . Lekeitioko udal mugarteko Alde Historikoa Birgaitzeko Plan Berezia behin...",
"sentence2": "Diezek kritikatu egin zuen EAJk zenbait udaletan EH gobernu taldeetatik at utzi...",
"start1": 16,
"start2": 40,
"end1": 21,
"end2": 49
}
```
#### EpecKorrefBin
An example of 'train' looks as follows:
```
{
"idx": 6,
"label": false,
"text": "Isuntza da faborito nagusia Elantxobeko banderan . ISUNTZA trainerua da faborito nagusia bihar Elantxoben jokatuko den bandera irabazteko .",
"span1_text": "Elantxobeko banderan",
"span2_text": "ISUNTZA trainerua",
"span1_index": 4,
"span2_index": 8
}
```
### Data Fields
#### NERCid
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### NERCood
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### FMTODeu_intent
* `text`: a `string` feature
* `label`: an intent label, with possible values including:
* `alarm/cancel_alarm`
* `alarm/modify_alarm`
* `alarm/set_alarm`
* `alarm/show_alarms`
* `alarm/snooze_alarm`
* `alarm/time_left_on_alarm`
* `reminder/cancel_reminder`
* `reminder/set_reminder`
* `reminder/show_reminders`
* `weather/checkSunrise`
* `weather/checkSunset`
* `weather/find`
* `idx`: an `int32` feature
#### FMTODeu_slot
* `tokens`: a list of `string` features
* `tags`: a list of intent labels, with possible values including:
* `datetime`
* `location`
* `negation`
* `alarm/alarm_modifier`
* `alarm/recurring_period`
* `reminder/noun`
* `reminder/todo`
* `reminder/reference`
* `reminder/recurring_period`
* `weather/attribute`
* `weather/noun`
* `idx`: an `int32` feature
#### BHTCv2
* `text`: a `string` feature
* `label`: a topic label, with possible values including:
  * `Ekonomia`
  * `Euskal Herria`
  * `Euskara`
  * `Gizartea`
  * `Historia`
  * `Ingurumena`
  * `Iritzia`
  * `Komunikazioa`
  * `Kultura`
  * `Nazioartea`
  * `Politika`
  * `Zientzia`
* `idx`: an `int32` feature
#### BEC2016eu
* `text`: a `string` feature
* `label`: a polarity label, with possible values including `neutral` (NEU), `negative` (N), `positive` (P)
* `idx`: an `int32` feature
#### VaxxStance
* `text`: a `string` feature
* `label`: a stance label, with possible values including `AGAINST`, `FAVOR`, `NONE`
* `idx`: an `int32` feature
#### QNLIeu
* `question`: a `string` feature
* `sentence`: a `string` feature
* `label`: an entailment label, with possible values including `entailment`, `not_entailment`
* `idx`: an `int32` feature
#### WiCeu
* `word`: a `string` feature
* `sentence1`: a `string` feature
* `sentence2`: a `string` feature
* `label`: a `boolean` label indicating sense agreement, with possible values including `true`, `false`
* `start1`: an `int` feature indicating the character position where the word occurrence begins in the first sentence
* `start2`: an `int` feature indicating the character position where the word occurrence begins in the second sentence
* `end1`: an `int` feature indicating the character position where the word occurrence ends in the first sentence
* `end2`: an `int` feature indicating the character position where the word occurrence ends in the second sentence
* `idx`: an `int32` feature
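A minimal sketch of using the character offsets; the sentence and offsets below are constructed for illustration (assuming Python-style slicing with start inclusive and end exclusive — the exact convention should be checked against the data):

```python
# Constructed example; the real sentences and the exact offset convention
# should be verified against the released data.
example = {
    "word": "udal",
    "sentence1": "Lekeitioko udal mugarteko plana onartu da .",
    "start1": 11,
    "end1": 15,
}

# Slice the marked occurrence of the target word out of the sentence.
occurrence = example["sentence1"][example["start1"]:example["end1"]]
```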
#### EpecKorrefBin
* `text`: a `string` feature.
* `label`: a `boolean` coreference label, with possible values including `true`, `false`.
* `span1_text`: a `string` feature
* `span2_text`: a `string` feature
* `span1_index`: an `int` feature indicating token index where `span1_text` feature occurs in `text`
* `span2_index`: an `int` feature indicating token index where `span2_text` feature occurs in `text`
* `idx`: an `int32` feature
### Data Splits
| Dataset | \|Train\| | \|Val\| | \|Test\| |
|---------|--------:|------:|-------:|
| NERCid | 51,539 | 12,936 | 35,855 |
| NERCood | 64,475 | 14,945 | 14,462 |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 |
| BHTCv2 | 8,585 | 1,857 | 1,854 |
| BEC2016eu | 6,078 | 1,302 | 1,302 |
| VaxxStance | 864 | 206 | 312 |
| QNLIeu | 1,764 | 230 | 238 |
| WiCeu | 408,559 | 600 | 1,400 |
| EpecKorrefBin | 986 | 320 | 587 |
## Dataset Creation
### Curation Rationale
We believe that BasqueGLUE is a significant contribution towards developing NLU tools in Basque, and that it will facilitate technological advances for the Basque language. In order to create BasqueGLUE we took the GLUE and SuperGLUE frameworks as a reference. When possible, we re-used existing datasets for Basque, adapting them to the corresponding task formats where necessary. Additionally, BasqueGLUE also includes six new datasets that have not been published before. In total, BasqueGLUE consists of nine Basque NLU tasks and covers a wide range of tasks with different difficulties across several domains. As with the original GLUE benchmark, the training data for the tasks vary in size, which makes it possible to measure how well models transfer knowledge across tasks.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Gorka Urbizu [1], Iñaki San Vicente [1], Xabier Saralegi [1], Rodrigo Agerri [2] and Aitor Soroa [2]
Affiliation of the authors:
[1] orai NLP Technologies
[2] HiTZ Center - Ixa, University of the Basque Country UPV/EHU
### Licensing Information
Each dataset of the BasqueGLUE benchmark has its own license (since most of them are, or are derived from, already existing datasets). See their respective README files for details.
Here we provide a brief summary of their licenses:
| Dataset | License |
|---------|---------|
| NERCid | CC BY-NC-SA 4.0 |
| NERCood | CC BY-NC-SA 4.0 |
| FMTODeu_intent | CC BY-NC-SA 4.0 |
| FMTODeu_slot | CC BY-NC-SA 4.0 |
| BHTCv2 | CC BY-NC-SA 4.0 |
| BEC2016eu | Twitter's license + CC BY-NC-SA 4.0 |
| VaxxStance | Twitter's license + CC BY 4.0 |
| QNLIeu | CC BY-SA 4.0 |
| WiCeu | CC BY-NC-SA 4.0 |
| EpecKorrefBin | CC BY-NC-SA 4.0 |
For the rest of the files of the benchmark, including the loading and evaluation scripts, the following license applies:
Copyright (C) by Orai NLP Technologies.
This benchmark and evaluation scripts are licensed under the Creative Commons Attribution Share Alike 4.0
International License (CC BY-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
### Citation Information
```
@InProceedings{urbizu2022basqueglue,
author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1603--1612},
abstract = {Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.},
url = {https://aclanthology.org/2022.lrec-1.172}
}
```
### Contributions
Thanks to [@richplant](https://github.com/richplant) for adding this dataset to Hugging Face.
| 26,614 | [
[
-0.037445068359375,
-0.06353759765625,
0.022308349609375,
0.027313232421875,
-0.01415252685546875,
-0.0021381378173828125,
-0.027496337890625,
-0.044189453125,
0.036590576171875,
0.0280609130859375,
-0.0489501953125,
-0.0599365234375,
-0.049560546875,
0.0171... |
Cohere/wikipedia-22-12-ja-embeddings | 2023-03-22T16:55:06.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 1 | 17 | 2023-01-14T03:52:53 | ---
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ja)](https://ja.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ja-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | 3,803 | [
[
-0.0511474609375,
-0.051361083984375,
0.01195526123046875,
0.0008993148803710938,
-0.012420654296875,
-0.00685882568359375,
-0.023956298828125,
-0.0188446044921875,
0.04412841796875,
-0.0007801055908203125,
-0.037933349609375,
-0.062225341796875,
-0.045837402343... |
tomekkorbak/pile-detoxify | 2023-02-07T15:31:11.000Z | [
"task_categories:text-classification",
"task_categories:other",
"task_ids:acceptability-classification",
"task_ids:hate-speech-detection",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"sourc... | tomekkorbak | null | null | 1 | 17 | 2023-01-25T17:32:30 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: pile-detoxify
size_categories:
- 1M<n<10M
source_datasets:
- extended|the_pile
tags:
- toxicity
- pretraining-with-human-feedback
task_categories:
- text-classification
- other
task_ids:
- acceptability-classification
- hate-speech-detection
- text-scoring
---
# Dataset Card for pile-detoxify
## Dataset Description
- **Repository: https://github.com/tomekkorbak/aligned-pretraining-objectives**
- **Paper: Arxiv link to be added**
### Dataset Summary
This dataset contains text from [The Pile](https://huggingface.co/datasets/the_pile), annotated based on the toxicity of each sentence.
Each document (row in the dataset) is segmented into sentences, and each sentence is given a score: the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
This dataset is taken from [The Pile](https://huggingface.co/datasets/the_pile), which is English text.
## Dataset Structure
### Data Instances
1949977
### Data Fields
- texts (sequence): a list of the sentences in the document, segmented using SpaCy
- meta (dict): the section of [The Pile](https://huggingface.co/datasets/the_pile) from which it originated
- scores (sequence): a score for each sentence in the `texts` column indicating the toxicity predicted by [Detoxify](https://github.com/unitaryai/detoxify)
- avg_score (float64): the average of the scores listed in the `scores` column
- num_sents (int64): the number of sentences (and scores) in that document
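The relationship between `scores`, `avg_score`, and `num_sents` can be sketched on toy rows. The rows below are hypothetical illustrations of the schema, not actual dataset entries; real rows come from `load_dataset("tomekkorbak/pile-detoxify", split="train")`.

```python
# Hypothetical rows mimicking the schema described above.
rows = [
    {"texts": ["A calm sentence.", "Another calm one."], "scores": [0.01, 0.02]},
    {"texts": ["Something nasty."], "scores": [0.93]},
]

def summarize(row):
    # avg_score and num_sents are derivable from the per-sentence scores
    return {
        "avg_score": sum(row["scores"]) / len(row["scores"]),
        "num_sents": len(row["scores"]),
    }

# Keep only documents whose average predicted toxicity is below a threshold
clean = [r for r in rows if summarize(r)["avg_score"] < 0.5]
print(len(clean))  # 1
```

This kind of thresholding on `avg_score` is one way to filter training text for a detoxified language model.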
### Data Splits
Training set only
## Dataset Creation
### Curation Rationale
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile), a large dataset of text in English. The text is scored for toxicity so that generative language models can be trained to avoid generating toxic text.
### Source Data
#### Initial Data Collection and Normalization
This is labeled text from [The Pile](https://huggingface.co/datasets/the_pile).
#### Who are the source language producers?
Please see [The Pile](https://huggingface.co/datasets/the_pile) for the source of the dataset.
### Annotations
#### Annotation process
Each sentence was scored using [Detoxify](https://github.com/unitaryai/detoxify), which is a toxic comment classifier.
We used the `unbiased` model which is based on the 124M parameter [RoBERTa](https://arxiv.org/abs/1907.11692) and trained on the [Jigsaw Unintended Bias in Toxicity Classification dataset](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification).
#### Who are the annotators?
[Detoxify](https://github.com/unitaryai/detoxify)
### Personal and Sensitive Information
This dataset contains all personally identifiable information and toxic text that was originally contained in [The Pile](https://huggingface.co/datasets/the_pile).
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contains examples of toxic text and personally identifiable information.
(A version of this dataset with personally identifiable information annotated is [available here](https://huggingface.co/datasets/tomekkorbak/pile-pii-scrubadub).)
Please take care to avoid misusing the toxic text or putting anybody in danger by publicizing their information.
This dataset is intended for research purposes only. We cannot guarantee that all toxic text has been detected, and we cannot guarantee that models trained using it will avoid generating toxic text.
We do not recommend deploying models trained on this data.
### Discussion of Biases
This dataset contains all biases from The Pile discussed in their paper: https://arxiv.org/abs/2101.00027
### Other Known Limitations
The toxic text in this dataset was detected using imperfect automated detection methods. We cannot guarantee that the labels are 100% accurate.
## Additional Information
### Dataset Curators
[The Pile](https://huggingface.co/datasets/the_pile)
### Licensing Information
From [The Pile](https://huggingface.co/datasets/the_pile): PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
### Citation Information
Paper information to be added
### Contributions
[The Pile](https://huggingface.co/datasets/the_pile) | 4,406 | [
[
-0.01076507568359375,
-0.03314208984375,
0.0218658447265625,
0.015533447265625,
-0.0242767333984375,
-0.0177001953125,
0.003856658935546875,
-0.02117919921875,
0.0217742919921875,
0.047515869140625,
-0.03173828125,
-0.06524658203125,
-0.049591064453125,
0.02... |
gtfintechlab/finer-ord | 2023-02-23T22:17:44.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | gtfintechlab | null | null | 4 | 17 | 2023-02-07T22:03:57 | ---
license: cc-by-nc-4.0
task_categories:
- token-classification
language:
- en
pretty_name: FiNER
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
task_ids:
- named-entity-recognition
---
# Dataset Card for "FiNER-ORD"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation and Annotation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact Information](#contact-information)
## Dataset Description
- **Homepage:** [https://github.com/gtfintechlab/FiNER](https://github.com/gtfintechlab/FiNER)
- **Repository:** [https://github.com/gtfintechlab/FiNER](https://github.com/gtfintechlab/FiNER)
- **Paper:** [Arxiv Link]()
- **Point of Contact:** [Agam A. Shah](https://shahagam4.github.io/)
- **Size of train dataset file:** 1.08 MB
- **Size of validation dataset file:** 135 KB
- **Size of test dataset file:** 336 KB
### Dataset Summary
The FiNER-Open Research Dataset (FiNER-ORD) consists of a manually annotated dataset of financial news articles (in English)
collected from [webz.io](https://webz.io/free-datasets/financial-news-articles/).
In total, 47,851 news articles were available in this source at the time of writing the paper.
Each news article is available in the form of a JSON document with various metadata information like
the source of the article, publication date, author of the article, and the title of the article.
For the manual annotation of named entities in financial news, we randomly sampled 220 documents from the entire set of news articles.
We observed that some articles were empty in our sample, so after filtering the empty documents, we were left with a total of 201 articles.
We use [Doccano](https://github.com/doccano/doccano), an open-source annotation tool,
to ingest the raw dataset and manually label person (PER), location (LOC), and organization (ORG) entities.
For our experiments, we use the manually labeled FiNER-ORD to benchmark model performance.
Thus, we make a train, validation, and test split of FiNER-ORD.
To avoid biased results, manual annotation is performed by annotators who have no knowledge about the labeling functions for the weak supervision framework.
The train and validation sets are annotated by two separate annotators and validated by a third annotator.
The test dataset is annotated by another annotator. We present a manual annotation guide in the Appendix of the paper detailing the procedures used to create the manually annotated FiNER-ORD.
After manual annotation, the news articles are split into sentences.
We then tokenize each sentence, employing a script to tokenize multi-token entities into separate tokens (e.g. PER_B denotes the beginning token of a person (PER) entity
and PER_I represents intermediate PER tokens). We exclude white spaces when tokenizing multi-token entities.
The descriptive statistics on the resulting FiNER-ORD are available in the Table of [Data Splits](#data-splits) section.
For more details check [information in paper]()
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
- It is a monolingual English dataset
## Dataset Structure
### Data Instances
#### FiNER-ORD
- **Size of train dataset file:** 1.08 MB
- **Size of validation dataset file:** 135 KB
- **Size of test dataset file:** 336 KB
### Data Fields
The data fields are the same among all splits.
#### FiNER-ORD
- `doc_idx`: Document ID (`int`)
- `sent_idx`: Sentence ID within each document (`int`)
- `gold_token`: Token (`string`)
- `gold_label`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'PER_B': 1, 'PER_I': 2, 'LOC_B': 3, 'LOC_I': 4, 'ORG_B': 5, 'ORG_I': 6}
```
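The `_B`/`_I` tags above can be decoded back into entity spans by grouping a beginning token with the intermediate tokens that follow it. The token sequence below is a hypothetical example, not taken from the dataset:

```python
# Minimal sketch of decoding the FiNER-ORD tagset into entity spans.
ID2TAG = {0: 'O', 1: 'PER_B', 2: 'PER_I', 3: 'LOC_B', 4: 'LOC_I', 5: 'ORG_B', 6: 'ORG_I'}

def decode_entities(tokens, label_ids):
    entities, current = [], None
    for tok, lid in zip(tokens, label_ids):
        tag = ID2TAG[lid]
        if tag.endswith('_B'):          # a new entity starts
            if current:
                entities.append(current)
            current = [tag[:-2], [tok]]
        elif tag.endswith('_I') and current and current[0] == tag[:-2]:
            current[1].append(tok)      # continue the current entity
        else:                           # 'O' or an inconsistent tag ends the entity
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, ' '.join(toks)) for etype, toks in entities]

tokens = ['John', 'Smith', 'joined', 'Goldman', 'Sachs', '.']
labels = [1, 2, 0, 5, 6, 0]
print(decode_entities(tokens, labels))  # [('PER', 'John Smith'), ('ORG', 'Goldman Sachs')]
```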
### Data Splits
| **FiNER-ORD** | **Train** | **Validation** | **Test** |
|------------------|----------------|---------------------|---------------|
| # Articles | 135 | 24 | 42 |
| # Tokens | 80,531 | 10,233 | 25,957 |
| # LOC entities | 1,255 | 267 | 428 |
| # ORG entities | 3,440 | 524 | 933 |
| # PER entities | 1,374 | 222 | 466 |
## Dataset Creation and Annotation
[Information in paper ]()
## Additional Information
### Licensing Information
[Information in paper ]()
### Citation Information
```
@article{shah2023finer,
title={FiNER: Financial Named Entity Recognition Dataset and Weak-supervision Model},
author={Agam Shah and Ruchit Vithani and Abhinav Gullapalli and Sudheer Chava},
journal={arXiv preprint arXiv:2302.11157},
year={2023}
}
```
### Contact Information
Please contact Agam Shah (ashah482[at]gatech[dot]edu) or Ruchit Vithani (rvithani6[at]gatech[dot]edu) about any FiNER-related issues and questions.
GitHub: [@shahagam4](https://github.com/shahagam4), [@ruchit2801](https://github.com/ruchit2801)
Website: [https://shahagam4.github.io/](https://shahagam4.github.io/)
| 5,541 | [
[
-0.0360107421875,
-0.0521240234375,
0.01552581787109375,
0.002475738525390625,
-0.0171051025390625,
-0.025390625,
-0.032501220703125,
-0.0408935546875,
0.01392364501953125,
0.0236968994140625,
-0.03680419921875,
-0.0677490234375,
-0.043853759765625,
0.005332... |
sedthh/ubuntu_dialogue_qa | 2023-02-28T20:50:15.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"ubuntu",
"forum",
"linux",
"chat",
"region:us"
] | sedthh | null | null | 1 | 17 | 2023-02-28T20:49:12 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
- name: METADATA
dtype: string
splits:
- name: train
num_bytes: 4021291
num_examples: 16181
download_size: 2157548
dataset_size: 4021291
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- ubuntu
- forum
- linux
- chat
pretty_name: Q&A from the Ubuntu Dialogue Corpus
size_categories:
- 10K<n<100K
---
# Dataset Card for "ubuntu_dialogue_qa"
The Ubuntu dialogue chat logs from https://www.kaggle.com/datasets/rtatman/ubuntu-dialogue-corpus were filtered to include Q&A pairs **only**.
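A row of this dataset can be turned into a prompt string from its four string fields. The row below is a hypothetical illustration of the schema (real rows come from `load_dataset("sedthh/ubuntu_dialogue_qa", split="train")`), and treating `METADATA` as a JSON-encoded string is an assumption:

```python
import json

# Hypothetical row illustrating the INSTRUCTION/RESPONSE/SOURCE/METADATA fields.
row = {
    "INSTRUCTION": "How do I list installed packages?",
    "RESPONSE": "Run `dpkg --get-selections` or `apt list --installed`.",
    "SOURCE": "ubuntu_dialogue_corpus",
    "METADATA": '{"folder": 3}',  # stored as a string; assumed JSON-encoded here
}

meta = json.loads(row["METADATA"])   # parse only if the string is valid JSON
prompt = f"Q: {row['INSTRUCTION']}\nA: {row['RESPONSE']}"
print(meta["folder"])
```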
**Acknowledgements**
This dataset was ORIGINALLY collected by Ryan Lowe, Nissan Pow, Iulian V. Serban and Joelle Pineau. It is made available here under the Apache License 2.0. If you use this data in your work, please include the following citation:
Ryan Lowe, Nissan Pow, Iulian V. Serban and Joelle Pineau, "The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems", SIGDial 2015. URL: http://www.sigdial.org/workshops/conference16/proceedings/pdf/SIGDIAL40.pdf | 1,197 | [
[
-0.0241546630859375,
-0.054290771484375,
0.04681396484375,
0.01320648193359375,
-0.037567138671875,
0.02618408203125,
-0.005107879638671875,
-0.01509857177734375,
0.0380859375,
0.06536865234375,
-0.0584716796875,
-0.037841796875,
-0.01641845703125,
-0.009124... |
OpenBioML/chebi_20 | 2023-03-03T22:27:47.000Z | [
"region:us"
] | OpenBioML | null | null | 0 | 17 | 2023-03-03T22:18:18 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
dmargutierrez/TASTESet | 2023-03-17T09:38:31.000Z | [
"task_categories:text-classification",
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:en",
"food",
"recipes",
"name-entity-recognition",
"region:us"
] | dmargutierrez | null | null | 0 | 17 | 2023-03-10T14:41:03 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: recipes
sequence: string
- name: prediction_mask
sequence: bool
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 652275.4
num_examples: 490
- name: test
num_bytes: 279546.6
num_examples: 210
download_size: 161613
dataset_size: 931822
task_categories:
- text-classification
- token-classification
subtask_categories:
- name-entity-recognition
language:
- en
tags:
- food
- recipes
- name-entity-recognition
pretty_name: TASTESet
size_categories:
- 1K<n<10K
---
# Dataset Card for "TASTESet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 843 | [
[
-0.041595458984375,
-0.00405120849609375,
0.0159454345703125,
0.0218505859375,
-0.01529693603515625,
-0.00989532470703125,
0.0174560546875,
-0.010345458984375,
0.079345703125,
0.036163330078125,
-0.0609130859375,
-0.04827880859375,
-0.0426025390625,
-0.02899... |
IlyaGusev/librusec | 2023-03-20T16:03:43.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"region:us"
] | IlyaGusev | null | null | 4 | 17 | 2023-03-12T12:57:59 | ---
dataset_info:
features:
- name: id
dtype: uint64
- name: text
dtype: string
splits:
- name: train
num_bytes: 125126513109
num_examples: 223256
download_size: 34905399148
dataset_size: 125126513109
task_categories:
- text-generation
language:
- ru
size_categories:
- 100K<n<1M
---
# Librusec dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
## Description
**Summary:** Based on http://panchenko.me/data/russe/librusec_fb2.plain.gz. Uploaded here for convenience. Additional cleaning was performed.
**Script:** [create_librusec.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_librusec.py)
**Point of Contact:** [Ilya Gusev](ilya.gusev@phystech.edu)
**Languages:** Russian.
## Usage
Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```
Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/librusec', split="train", streaming=True)
for example in dataset:
print(example["text"])
``` | 1,093 | [
[
-0.0258026123046875,
-0.0234527587890625,
0.00920867919921875,
0.015838623046875,
-0.0279083251953125,
-0.009521484375,
-0.005786895751953125,
0.004512786865234375,
0.010650634765625,
0.0272674560546875,
-0.040802001953125,
-0.044097900390625,
-0.0201416015625,
... |
KETI-AIR/coco | 2023-03-22T11:45:13.000Z | [
"task_categories:object-detection",
"size_categories:100K<n<1M",
"language:en",
"license:apache-2.0",
"region:us"
] | KETI-AIR | COCO is a large-scale object detection, segmentation, and
captioning dataset.
Note:
* Some images from the train and validation sets don't have annotations.
* Coco 2014 and 2017 use the same images, but different train/val/test splits
* The test split doesn't have any annotations (only images).
* Coco defines 91 classes but the data only uses 80 classes.
* Panoptic annotations define 200 classes but only 133 are used. | @article{DBLP:journals/corr/LinMBHPRDZ14,
author = {Tsung{-}Yi Lin and
Michael Maire and
Serge J. Belongie and
Lubomir D. Bourdev and
Ross B. Girshick and
James Hays and
Pietro Perona and
Deva Ramanan and
Piotr Doll{\'{a}}r and
C. Lawrence Zitnick},
title = {Microsoft {COCO:} Common Objects in Context},
journal = {CoRR},
volume = {abs/1405.0312},
year = {2014},
url = {http://arxiv.org/abs/1405.0312},
archivePrefix = {arXiv},
eprint = {1405.0312},
timestamp = {Mon, 13 Aug 2018 16:48:13 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/LinMBHPRDZ14},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | 0 | 17 | 2023-03-21T11:05:35 | ---
license: apache-2.0
task_categories:
- object-detection
language:
- en
size_categories:
- 100K<n<1M
pretty_name: Coco
---
# Coco dataset loader based on the TensorFlow Datasets `coco` dataset
## Object Detection
```python
import os
from datasets import load_dataset
from PIL import Image, ImageFont, ImageDraw, ImageColor
def calc_lum(rgb):
return (0.2126*rgb[0] + 0.7152*rgb[1] + 0.0722*rgb[2])
COLOR_MAP = [ImageColor.getrgb(code) for name, code in ImageColor.colormap.items()]
def get_text_bbox(bb, tbb, margin, im_w, im_h, anchor="leftBottom"):
m = margin
l, t, r, b = bb
tl, tt, tr, tb = tbb
bbw, bbh = r - l, b - t
tbbw, tbbh = tr - tl, tb - tt
# bbox (left-top)
if anchor == "leftTop":
ax, ay = l, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x1, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x1, y1 = max(ax, 0), max(ay, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return (( x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightTop":
ax, ay = r, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x2, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x2, y1 = max(ax, 0), max(ay, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightBottom":
ax, ay = r, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x2, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x2, y2 = min(ax, im_w), max(ay, 0)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "leftBottom":
ax, ay = l, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "centerBottom":
ax, ay = (l+r)//2, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax - tr//2 - m, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax - tr//2 - m, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
def draw_bbox(image, objects, out_path, label_names=None, font="Roboto-Bold.ttf", fontsize=15, fill=True, opacity=60, width=2, margin=3, anchor="leftBottom"):
fnt = ImageFont.truetype(font, fontsize)
im_w, im_h = image.size
img = image.convert("RGBA")
overlay = Image.new('RGBA', img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
for bb, lbl_id in zip(objects["bbox"], objects["label"]):
c = COLOR_MAP[min(lbl_id, len(COLOR_MAP)-1)]
fill_c = c + (opacity, ) if fill else None
draw.rectangle((bb[0], bb[1], bb[2], bb[3]), outline=c, fill=fill_c, width=width)
text = ""
if label_names is not None:
text = label_names[lbl_id]
tbb = fnt.getbbox(text)
btn_bbox, text_pos = get_text_bbox(bb, tbb, margin, im_w, im_h, anchor)
fc = (0, 0, 0) if calc_lum(c) > 150 else (255, 255, 255)
draw.rectangle(btn_bbox, outline=c, fill=c + (255, ))
draw.text(text_pos, text, font=fnt, fill=fc + (255, ))
img = Image.alpha_composite(img, overlay)
overlay = Image.new('RGBA', img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
img = img.convert("RGB")
img.save(out_path)
raw_datasets = load_dataset(
"coco.py",
"2017",
cache_dir="./huggingface_datasets",
)
train_dataset = raw_datasets["train"]
label_list = raw_datasets["train"].features["objects"].feature['label'].names
for idx, item in zip(range(10), train_dataset):
draw_bbox(item["image"], item["objects"], item["image/filename"], label_list)
```


## Panoptic segmentation
```python
import numpy as np
from datasets import load_dataset
from PIL import Image, ImageFont, ImageDraw, ImageColor
from transformers.image_transforms import (
rgb_to_id,
)
def calc_lum(rgb):
return (0.2126*rgb[0] + 0.7152*rgb[1] + 0.0722*rgb[2])
COLOR_MAP = [ImageColor.getrgb(code) for name, code in ImageColor.colormap.items()]
def get_text_bbox(bb, tbb, margin, im_w, im_h, anchor="leftBottom"):
m = margin
l, t, r, b = bb
tl, tt, tr, tb = tbb
bbw, bbh = r - l, b - t
tbbw, tbbh = tr - tl, tb - tt
# bbox (left-top)
if anchor == "leftTop":
ax, ay = l, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x1, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x1, y1 = max(ax, 0), max(ay, 0)
x2, y2 = min(x1 + tr + 2*m, im_w), min(y1 + tb + 2*m, im_h)
return (( x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightTop":
ax, ay = r, t
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-bottom)
x2, y1 = max(ax, 0), max(ay - tb - 2*m, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-top)
x2, y1 = max(ax, 0), max(ay, 0)
x1, y2 = max(x2 - tr - 2*m, 0), min(y1 + tb + 2*m, im_h)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "rightBottom":
ax, ay = r, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x2, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x2, y2 = min(ax, im_w), max(ay, 0)
x1, y1 = max(x2 - tr - 2*m, 0), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "leftBottom":
ax, ay = l, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
elif anchor == "centerBottom":
ax, ay = (l+r)//2, b
if tbbw*3 > bbw or tbbh*4 > bbh:
# align (text box: left-top)
x1, y2 = min(ax - tr//2 - m, im_w), min(ay + tb + 2*m, im_h)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
else:
# align (text box: left-bottom)
x1, y2 = min(ax - tr//2 - m, im_w), max(ay, 0)
x2, y1 = min(x1 + tr + 2*m, im_w), max(y2 - tb - 2*m, 0)
return ((x1, y1, x2, y2), (max(x1+m, 0), max(y1+m, 0)))
# Copied from transformers.models.detr.image_processing_detr.masks_to_boxes
def masks_to_boxes(masks: np.ndarray) -> np.ndarray:
"""
Compute the bounding boxes around the provided panoptic segmentation masks.
Args:
masks: masks in format `[number_masks, height, width]` where N is the number of masks
Returns:
boxes: bounding boxes in format `[number_masks, 4]` in xyxy format
"""
if masks.size == 0:
return np.zeros((0, 4))
h, w = masks.shape[-2:]
y = np.arange(0, h, dtype=np.float32)
x = np.arange(0, w, dtype=np.float32)
# see https://github.com/pytorch/pytorch/issues/50276
y, x = np.meshgrid(y, x, indexing="ij")
x_mask = masks * np.expand_dims(x, axis=0)
x_max = x_mask.reshape(x_mask.shape[0], -1).max(-1)
x = np.ma.array(x_mask, mask=~(np.array(masks, dtype=bool)))
x_min = x.filled(fill_value=1e8)
x_min = x_min.reshape(x_min.shape[0], -1).min(-1)
y_mask = masks * np.expand_dims(y, axis=0)
y_max = y_mask.reshape(x_mask.shape[0], -1).max(-1)
y = np.ma.array(y_mask, mask=~(np.array(masks, dtype=bool)))
y_min = y.filled(fill_value=1e8)
y_min = y_min.reshape(y_min.shape[0], -1).min(-1)
return np.stack([x_min, y_min, x_max, y_max], 1)
def draw_seg(image, panoptic_image, oids, labels, out_path, label_names=None, font="Roboto-Bold.ttf", fontsize=15, opacity=160, anchor="leftBottom"):
fnt = ImageFont.truetype(font, fontsize)
im_w, im_h = image.size
masks = np.asarray(panoptic_image, dtype=np.uint32)
masks = rgb_to_id(masks)
oids = np.array(oids, dtype=np.uint32)
masks = masks == oids[:, None, None]
masks = masks.astype(np.uint8)
bboxes = masks_to_boxes(masks)
img = image.convert("RGBA")
for label, mask, bbox in zip(labels, masks, bboxes):
c = COLOR_MAP[min(label, len(COLOR_MAP)-1)]
cf = np.array(c + (opacity, )).astype(np.uint8)
cmask = mask[:, :, None] * cf[None, None, :]
cmask = Image.fromarray(cmask)
img = Image.alpha_composite(img, cmask)
if label_names is not None:
text = label_names[label]
tbb = fnt.getbbox(text)
btn_bbox, text_pos = get_text_bbox(bbox, tbb, 3, im_w, im_h, anchor=anchor)
overlay = Image.new('RGBA', img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
fc = (0, 0, 0) if calc_lum(c) > 150 else (255, 255, 255)
draw.rectangle(btn_bbox, outline=c, fill=c + (255, ))
draw.text(text_pos, text, font=fnt, fill=fc + (255, ))
img = Image.alpha_composite(img, overlay)
img = img.convert("RGB")
img.save(out_path)
raw_datasets = load_dataset(
"coco.py",
"2017_panoptic",
cache_dir="./huggingface_datasets",
# data_dir="./data",
)
train_dataset = raw_datasets["train"]
label_list = raw_datasets["train"].features["panoptic_objects"].feature['label'].names
for idx, item in zip(range(10), train_dataset):
draw_seg(
item["image"],
item["panoptic_image"],
item["panoptic_objects"]["id"],
item["panoptic_objects"]["label"],
"panoptic_" + item["image/filename"],
label_list)
```


| 12,243 | [
[
-0.022064208984375,
-0.060882568359375,
0.04248046875,
0.0281829833984375,
0.01371002197265625,
-0.0213623046875,
0.01107025146484375,
-0.0188751220703125,
0.01910400390625,
0.02215576171875,
-0.038604736328125,
-0.058685302734375,
-0.0301666259765625,
0.003... |
Multimodal-Fatima/Food101_test | 2023-05-04T06:23:00.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | 0 | 17 | 2023-03-22T01:14:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple pie
'1': baby back ribs
'2': baklava
'3': beef carpaccio
'4': beef tartare
'5': beet salad
'6': beignets
'7': bibimbap
'8': bread pudding
'9': breakfast burrito
'10': bruschetta
'11': caesar salad
'12': cannoli
'13': caprese salad
'14': carrot cake
'15': ceviche
'16': cheesecake
'17': cheese plate
'18': chicken curry
'19': chicken quesadilla
'20': chicken wings
'21': chocolate cake
'22': chocolate mousse
'23': churros
'24': clam chowder
'25': club sandwich
'26': crab cakes
'27': creme brulee
'28': croque madame
'29': cup cakes
'30': deviled eggs
'31': donuts
'32': dumplings
'33': edamame
'34': eggs benedict
'35': escargots
'36': falafel
'37': filet mignon
'38': fish and chips
'39': foie gras
'40': french fries
'41': french onion soup
'42': french toast
'43': fried calamari
'44': fried rice
'45': frozen yogurt
'46': garlic bread
'47': gnocchi
'48': greek salad
'49': grilled cheese sandwich
'50': grilled salmon
'51': guacamole
'52': gyoza
'53': hamburger
'54': hot and sour soup
'55': hot dog
'56': huevos rancheros
'57': hummus
'58': ice cream
'59': lasagna
'60': lobster bisque
'61': lobster roll sandwich
'62': macaroni and cheese
'63': macarons
'64': miso soup
'65': mussels
'66': nachos
'67': omelette
'68': onion rings
'69': oysters
'70': pad thai
'71': paella
'72': pancakes
'73': panna cotta
'74': peking duck
'75': pho
'76': pizza
'77': pork chop
'78': poutine
'79': prime rib
'80': pulled pork sandwich
'81': ramen
'82': ravioli
'83': red velvet cake
'84': risotto
'85': samosa
'86': sashimi
'87': scallops
'88': seaweed salad
'89': shrimp and grits
'90': spaghetti bolognese
'91': spaghetti carbonara
'92': spring rolls
'93': steak
'94': strawberry shortcake
'95': sushi
'96': tacos
'97': takoyaki
'98': tiramisu
'99': tuna tartare
'100': waffles
- name: id
dtype: int64
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_food101
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
splits:
- name: test
num_bytes: 1317820332.5
num_examples: 25250
download_size: 1263803958
dataset_size: 1317820332.5
---
# Dataset Card for "Food101_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 4,330 | [
[
-0.036529541015625,
-0.0276641845703125,
0.0002734661102294922,
0.01084136962890625,
0.01554107666015625,
-0.012115478515625,
0.0211639404296875,
-0.01209259033203125,
0.070556640625,
0.0235443115234375,
-0.052581787109375,
-0.03692626953125,
-0.041473388671875,... |
RyokoAI/Syosetu711K | 2023-04-05T01:13:44.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ja",
"license:apache-2.0",
"novel",
"training",
"region:us"
] | RyokoAI | null | null | 8 | 17 | 2023-03-28T23:57:10 | ---
license: apache-2.0
language:
- ja
tags:
- novel
- training
task_categories:
- text-classification
- text-generation
pretty_name: Syosetuka ni Narou 711K
size_categories:
- 100K<n<1M
---
# Dataset Card for Syosetu711K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor/undeleted <ronsor@ronsor.com>
### Dataset Summary
Syosetu711K is a dataset composed of approximately 711,700 novels scraped from the Japanese novel self-publishing
website Syosetuka ni Narou (JA: 小説家になろう, lit. "Let's Become a Novelist") between March 26 and March 27, 2023.
The dataset contains most if not all novels published on the site, regardless of length or quality; however, we
include metadata so users of this dataset can filter and evaluate its contents.
Syosetu711Kは、日本の小説投稿サイト「小説家になろう」から2023年3月26日から27日にかけてスクレイプされた約711,700冊の小説から
構成されるデータセットです。このデータセットには、長さや品質に関係なく、サイトに掲載されているほとんどの小説が含まれています。ただし、
各小説のIDも含まれているため、小説家になろうAPIを使ってその情報を検索することができます。
### Supported Tasks and Leaderboards
This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.
* text-classification
* text-generation
### Languages
* Japanese
## Dataset Structure
### Data Instances
```json
{
"text": "【小説タイトル】\n焼けて爛れる恋よりも、微睡む優しい愛が欲しい\n【Nコード】\nN5029ID\n【作者名】\n秋暁秋季\n【あらすじ】\n俺の彼女は物凄く気の多い人だった。\nお眼鏡に適う奴が居れば、瞳孔を蕩
けさせる人だった。\nその癖照れ屋で、すぐに目を逸らす。\nな...",
"meta": {
"subset": "syosetu",
"q": 0.6,
"id": "N5029ID",
"author": "秋暁秋季",
"userid": 719797,
"title": "焼けて爛れる恋よりも、微睡む優しい愛が欲しい",
"length": 871,
"points": 0,
"lang": "ja",
"chapters": 1,
"keywords": ["気が多い", "浮気性", "無愛想", "照れる", "嫉妬", "好みではない", "クソデカ感情", "空気のような安心感"],
"isr15": 0,
"genre": 102,
"biggenre": 1
}
}
{
"text": "【小説タイトル】\n【能力者】\n【Nコード】\nN9864IB\n【作者名】\n夢音いちご\n【あらすじ】\n私立アビリティ学園。\n小・中・高・大が一貫となった、大規模な名門校。\nそして、ここは規模の大きさだけ
でなく、ある特殊な制度を設けて\nいることでも有名だ。\nそれ...",
"meta": {
"subset": "syosetu",
"q": 0.6,
"id": "N9864IB",
"author": "夢音いちご",
"userid": 1912777,
"title": "【能力者】",
"length": 2334,
"points": 0,
"lang": "ja",
"chapters": 2,
"keywords": ["ガールズラブ", "身分差", "伝奇", "日常", "青春", "ラブコメ", "女主人公", "学園", "魔法", "超能力"],
"isr15": 0,
"genre": 202,
"biggenre": 2
}
}
```
### Data Fields
* `text`: the actual novel text, all chapters
* `meta`: novel metadata
* `subset`: dataset tag: `syosetu`
* `lang`: dataset language: `ja` (Japanese)
* `id`: novel ID/ncode
* `author`: author name
* `userid`: author user ID
* `title`: novel title
* `length`: novel length in words
* `points`: global points (corresponds to `global_point` from the Syosetu API)
* `q`: q-score (quality score) calculated based on `points`
* `chapters`: number of chapters (corresponds to `general_all_no` from the Syosetu API)
* `keywords`: array of novel keywords (corresponds to `keyword` from the Syosetu API, split on spaces)
* `isr15`: whether the novel is rated R15+
* `genre`: novel genre ID (optional, see Syosetu API documentation)
* `biggenre`: general novel genre ID (optional, see Syosetu API documentation)
* `isr18`: whether the novel is rated R18+
* `nocgenre`: novel genre ID (optional, only available if `isr18` is true, see Syosetu API documentation)
*For further reference, see the Syosetuka ni Narou API documentation: <https://dev.syosetu.com/man/api/> (JA).*
#### Q-Score Distribution
```
0.00: 0
0.10: 0
0.20: 0
0.30: 0
0.40: 0
0.50: 213005
0.60: 331393
0.70: 101971
0.80: 63877
0.90: 1542
1.00: 2
```
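As a rough illustration of how these metadata fields can be combined, the sketch below filters records by q-score and the R15 flag. The records are fabricated stand-ins that only follow the schema above (the third ID is invented), and the 0.7 threshold is an arbitrary example, not a recommendation.

```python
# Illustrative only: fabricated records following the schema above.
records = [
    {"text": "...", "meta": {"id": "N5029ID", "q": 0.6, "isr15": 0}},
    {"text": "...", "meta": {"id": "N9864IB", "q": 0.8, "isr15": 0}},
    {"text": "...", "meta": {"id": "N0000ZZ", "q": 0.9, "isr15": 1}},  # hypothetical ID
]

def keep(record, min_q=0.7, allow_r15=False):
    """Keep a novel whose q-score clears `min_q`, optionally excluding R15+ works."""
    meta = record["meta"]
    return meta["q"] >= min_q and (allow_r15 or not meta["isr15"])

kept_ids = [r["meta"]["id"] for r in records if keep(r)]
print(kept_ids)  # ['N9864IB']
```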
### Data Splits
No splitting of the data was performed.
## Dataset Creation
### Curation Rationale
Syosetuka ni Narou is the most popular website in Japan for authors wishing to self-publish their novels online. Many works on
the site have been picked up by large commercial publishers. Because of this, we believe that this dataset provides a large corpus
of high-quality, creative content in the Japanese language.
### Source Data
#### Initial Data Collection and Normalization
*More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.*
First, metadata for all novels on the site was gathered into a JSON lines (JSONL) file. The Syosetuka ni Narou API was used to
obtain this information.
Second, this listing was used to create a secondary text file containing a list of only the novel "ncodes," or IDs. This
secondary file was distributed to downloader nodes.
Third, the sister site <https://pdfnovels.net> was queried with each novel ID, and the resulting PDF was saved for later processing.
Fourth, the `pdftotext` tool was used to convert the PDF files to text documents. A few other scripts were then used to clean up
the resulting text files.
Finally, the text files and other metadata were converted into the specified data field schema above, and the resulting JSON entries
were concatenated into the Syosetu711K dataset. The version uploaded to this repository, however, is split into multiple files,
numbered 00 through 20 inclusive.
#### Who are the source language producers?
The authors of each novel.
### Annotations
#### Annotation process
Titles and general genre were collected alongside the novel text and IDs.
#### Who are the annotators?
There were no human annotators.
### Personal and Sensitive Information
The dataset contains only works of fiction, and we do not believe it contains any PII.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content in Japanese.
It may also be useful for other languages depending on your language model.
### Discussion of Biases
This dataset is composed of fictional works by various authors. Because of this fact, the contents of this dataset will reflect
the biases of those authors. **Additionally, this dataset contains NSFW material and was not filtered. Beware of stereotypes.**
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
Ronsor Labs
### Licensing Information
Apache 2.0, for all parts of which Ronsor Labs or the Ryoko AI Production Committee may be considered authors. All other material is
distributed under fair use principles.
### Citation Information
```
@misc{ryokoai2023-bigknow2022,
title = {BigKnow2022: Bringing Language Models Up to Speed},
author = {Ronsor},
year = {2023},
howpublished = {\url{https://github.com/RyokoAI/BigKnow2022}},
}
```
### Contributions
Thanks to @ronsor (GH) for gathering this dataset. | 6,876 | [
[
-0.0265350341796875,
-0.0380859375,
0.0272369384765625,
0.0125274658203125,
-0.0251312255859375,
-0.0179901123046875,
-0.0263519287109375,
-0.0227508544921875,
0.04327392578125,
0.035614013671875,
-0.05487060546875,
-0.05670166015625,
-0.03692626953125,
0.04... |
vietgpt/daily_dialog_vi | 2023-06-21T14:11:16.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:vi",
"SFT",
"region:us"
] | vietgpt | null | null | 1 | 17 | 2023-03-29T14:57:48 | ---
dataset_info:
features:
- name: dialog
sequence: string
splits:
- name: train
num_bytes: 7803227
num_examples: 11118
- name: validation
num_bytes: 718575
num_examples: 1000
- name: test
num_bytes: 698896
num_examples: 1000
download_size: 4841457
dataset_size: 9220698
task_categories:
- conversational
language:
- vi
tags:
- SFT
size_categories:
- 10K<n<100K
---
# DailyDialog
- Source: https://huggingface.co/datasets/daily_dialog
- Num examples:
- 11,118 (train)
- 1,000 (validation)
- 1,000 (test)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/daily_dialog_vi")
``` | 669 | [
[
-0.01198577880859375,
-0.0506591796875,
0.01410675048828125,
0.039764404296875,
-0.0184326171875,
-0.031494140625,
0.0108489990234375,
-0.00917816162109375,
-0.0010747909545898438,
0.0433349609375,
-0.0626220703125,
-0.03863525390625,
-0.0266571044921875,
0.... |
anon8231489123/Omegle_logs_dataset | 2023-04-02T23:34:21.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | anon8231489123 | null | null | 6 | 17 | 2023-04-02T23:15:21 | ---
license: apache-2.0
language:
- en
---
~10k conversations from Omegle. Scraped using: http://web.archive.org/cdx/search/xd?url=logs.omegle.com/*&fl=timestamp,original,statuscode&output=json. For these logs to have ended up in the CDX index, the URL must have been posted publicly at some point.
* PII removed by searching for conversations with these words: forbidden_words = ["kik", "telegram", "skype", "wickr", "discord", "dropbox", "insta ", "insta?", "instagram", "snap ", "snapchat"].
* Conversations with racial slurs removed.
* English only.
* Obviously, the dataset still contains a lot of (sometimes extreme) NSFW content. Do not view or use this dataset if you are under 18.
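A minimal sketch of the keyword-based PII filter described above is shown below. The word list is the one quoted in the first bullet; the matching logic (lowercase substring search) is an assumption, since the original filtering script is not published.

```python
# Assumed matching logic: simple case-insensitive substring search.
forbidden_words = ["kik", "telegram", "skype", "wickr", "discord", "dropbox",
                   "insta ", "insta?", "instagram", "snap ", "snapchat"]

def is_clean(conversation: str) -> bool:
    """True if the conversation mentions none of the forbidden platform words."""
    lowered = conversation.lower()
    return not any(word in lowered for word in forbidden_words)

conversations = ["stranger: hi, how are you?", "stranger: add me on snapchat"]
print([c for c in conversations if is_clean(c)])  # keeps only the first
```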
General process for scraping (There are probably other datasets that can be scraped using this method):
1. Go to page in archive.org cdx
2. Check if the page contains a log
3. Download the log image
4. Use OCR to read it
5. Save it to a json file.
This dataset could be useful for training casual conversational AIs, but it likely still requires more filtering. Use at your own risk.
[
-0.038604736328125,
-0.0777587890625,
0.037017822265625,
0.0161895751953125,
-0.01351165771484375,
0.0012636184692382812,
-0.00970458984375,
-0.059356689453125,
0.00738525390625,
0.06707763671875,
-0.050933837890625,
-0.05828857421875,
-0.02667236328125,
0.0... |
IES-Rafael-Alberti/letras-carnaval-cadiz | 2023-06-04T11:51:32.000Z | [
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"lyrics",
"carnival",
"cadiz",
"region:us"
] | IES-Rafael-Alberti | This dataset is a comprehensive collection of lyrics from the Carnaval de Cádiz, a significant cultural heritage of the city of Cádiz, Spain. Despite its cultural importance, there has been a lack of a structured database for these lyrics, hindering research and public access to this cultural heritage. This dataset aims to address this gap.
The dataset was created by the Cádiz AI Learning Community, a branch of the non-profit association Spain AI, and was developed by Iván Romero Reyna and Jesús Federico Franco Medinilla, students of the Specialization Course in Artificial Intelligence and Big Data at IES Rafael Alberti during the 2022-2023 academic year. The project is supervised by Jesús Carlos Avecilla de la Herrán, a computational linguist.
Collaboration is encouraged, with individuals able to verify the different records of the dataset at letrascarnavalcadiz.com, ensuring the transcription of the lyrics and all data are correct. New lyrics can also be added to the dataset. Corrections and additions are not immediately reflected in the dataset but are updated periodically.
For more information or to report a problem, you can write to contacto@letrascarnavalcadiz.com. | @misc{letrascarnavalcadiz2023,
author = {Romero Reyna, Iván and Franco Medinilla, Jesús Federico and Avecilla de la Herrán, Jesús Carlos},
title = {letras-carnaval-cadiz},
year = {2023},
url = {https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz}
} | 2 | 17 | 2023-04-04T10:34:51 | ---
annotations_creators:
- no-annotation
language:
- es
language_creators:
- machine-generated
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: letrascarnavalcadiz
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- lyrics
- carnival
- cadiz
task_categories: []
task_ids: []
---
# Dataset Card for Letras Carnaval Cádiz

<h4 align="center">
<p>
<b>English</b> |
<a href="https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz/blob/main/README_es.md">Español</a>
<p>
</h4>
## Dataset Description
- **Homepage:** https://letrascarnavalcadiz.com
- **Repository:** https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz
- **Point of Contact:** contacto@letrascarnavalcadiz.com
### Changelog
|Release|Description|
|-|-|
|v1.0| Initial release of the dataset. Includes more than 1K lyrics. The accuracy of the data still needs to be verified, especially for the midaccurate subset. |
### Dataset Summary
This dataset is a comprehensive collection of lyrics from the Carnaval de Cádiz, a significant cultural heritage of the city of Cádiz, Spain. Despite its cultural importance, there has been a lack of a structured database for these lyrics, hindering research and public access to this cultural heritage. This dataset aims to address this gap.
The dataset was created by the Cádiz AI Learning Community, a branch of the non-profit association Spain AI, and was developed by Iván Romero Reyna and Jesús Federico Franco Medinilla, students of the Specialization Course in Artificial Intelligence and Big Data at IES Rafael Alberti during the 2022-2023 academic year. The project is supervised by Jesús Carlos Avecilla de la Herrán, a computational linguist.
Collaboration is encouraged, with individuals able to verify the different records of the dataset at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com), ensuring the transcription of the lyrics and all data are correct. New lyrics can also be added to the dataset. Corrections and additions are not immediately reflected in the dataset but are updated periodically.
For more information or to report a problem, you can write to [contacto@letrascarnavalcadiz.com](mailto:contacto@letrascarnavalcadiz.com).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Spanish, reflecting the language of the Carnaval de Cádiz.
## Dataset Structure
### Data Instances
A typical instance in the dataset is formatted in JSON and contains the following fields:
```json
{
"id": "9de8647521b728c45ff45c1c11208708d055397fd7781b31cf91b473dff224d5",
"authors": ["Juan Carlos Aragón Becerra"],
"song_type": 2,
"year": "2018",
"group": "Los Mafiosos",
"group_type": 2,
"lyrics": [
"Mujer va llegando el momento",
"de ser la que lleve la rienda",
"el camino ha sido largo y polvoriento",
"pero ya no habrá varón que te detenga",
"gritad larga vida a la reina",
"que va a comenzar tu gobierno",
"ojalá no heredes nada",
"de aquel macho que te odiaba",
"porque en el fondo sabía",
"que ya tú te le acercabas",
"y el contigo no podía",
"ten en cuenta cuando hagas justicia",
"de volver a nivelar la balanza",
"y aguantar aunque tragando saliva",
"el deseo de venganza",
"de ser oh humano fatal",
"de ser o que puedo entender",
"tan solo con una mirada",
"la llaga que baña tu alma y tu piel",
"que te sirva la experiencia",
"del macho de la manada",
"la fuerza no vale nada",
"si no es con la inteligencia",
"y ojalá que tu conciencia",
"a mí me brinde la suerte",
"de nunca volver a verte",
"con los pies en una iglesia",
"que ella fue quien escribió",
"que ella fue quien escribió",
"la historia contra vosotras",
"y encima se la cobró",
"y encima se la cobró",
"con mil millones de devotas",
"ojalá que tu corona y tu bandera",
"abran paso a una vida nueva",
"como un mundo en primavera",
"ojalá que a ti no te envenene el poder",
"y que no dejes nunca de ser la mujer",
"que siempre fue nuestra gran compañera"
]
}
```
The `id` field uniquely identifies each instance in the dataset, providing a way to reference specific entries. The `authors`, `song_type`, `year`, `group`, and `group_type` fields provide context for the lyrics, while the `lyrics` field itself contains the actual text of the song. The relationships between these fields are implicit in the structure of the dataset, with each instance representing a single song from the Carnaval de Cádiz.
### Data Fields
`id`
Unique identifier for each song in the dataset. A SHA-256 hash calculated from the first four verses of the lyrics and the group name, with all spaces removed and converted to lowercase (string).
`authors`
List of authors who have written the song (string array).
`song_type`
The type of song (1: presentación, 2: pasodoble/tango, 3: cuplé, 4: estribillo, 5: popurrí, 6: cuarteta).
`year`
Year the song was written or performed (string).
`group`
Name of the group that performed the song (string).
`group_type`
The type of the group (1: coro, 2: comparsa, 3: chirigota, 4: cuarteto).
`lyrics`
The lyrics of the song, represented as an array of verses (string array).
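For illustration, the `id` derivation described above can be sketched as follows. The concatenation order (verses first, then group name) and the absence of a separator are assumptions drawn from the field description, so this sketch may not reproduce the published hashes exactly.

```python
import hashlib

def song_id(lyrics: list[str], group: str) -> str:
    """SHA-256 over the first four verses plus the group name,
    with all spaces removed and everything lowercased.
    Concatenation order/separator are assumptions."""
    raw = "".join(lyrics[:4]) + group
    normalized = raw.replace(" ", "").lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(song_id(["A B", "C"], "Los X"))  # hex digest of "abclosx"
```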
### Data Splits
This dataset does not have traditional training, validation, and test splits. Instead, it is divided into two subsets: "accurate" and "midaccurate".
The "accurate" subset contains 958 instances. All fields of first 957 instances in this subset have been obtained through web scraping and have undergone at least one human review for accuracy. The rest have been added by users at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com).
The "midaccurate" subset contains 226 instances. The 'group' and 'lyrics' fields in this subset were collected through web scraping, but the remaining fields were filled in by querying language models connected to the Internet. Therefore, the data in these fields may not be accurate.
| Subset | Instances |
|-------------|----------:|
| Accurate | 958 |
| Midaccurate | 226 |
Please note that the division into subsets is based on the method and reliability of data collection, rather than a random or stratified split typically used in machine learning tasks. Users of the dataset should consider this when deciding how to use the data.
## Dataset Creation
### Curation Rationale
The dataset was created to address a significant need in the cultural heritage of the city of Cádiz, Spain. The Carnaval de Cádiz is a major cultural event, yet there was no structured database of its lyrics that could be consulted for research or public access. This lack of a structured database hindered the exploration and appreciation of this cultural heritage. The dataset was curated to respond to this need.
### Source Data
#### Initial Data Collection and Normalization
The initial collection of lyrics was carried out through automatic scraping of various websites and multimedia content on the Internet. To maximize the number of records with minimal effort, all collection is being done using different Artificial Intelligence models.
#### Who are the source language producers?
The source language producers of the dataset are the authors and performers of the songs from the Carnaval de Cádiz. These include a wide range of individuals and groups who have participated in the Carnaval over the years. The dataset does not include self-reported demographic or identity information for these individuals or groups.
The data in the dataset was collected from two websites: https://www.alsondelcarnaval.es and http://letrasdesdeelparaiso.blogspot.com. The first 957 instances of the "accurate" subset were collected from the former, while the "midaccurate" subset was collected from the latter. The data was extracted through automatic web scraping, and in the case of the "midaccurate" subset, some fields were filled in by querying language models connected to the Internet.
The rest of "accurate" subset have been added by users at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com).
### Personal and Sensitive Information
The only sensitive information in the dataset is the names and surnames of the authors of the lyrics.
## Considerations for Using the Data
### Social Impact of Dataset
The use of this dataset has significant social impact.
Firstly, this dataset can positively contribute to the understanding and preservation of Cadiz's culture and traditions, as the Carnaval de Cádiz is an integral part of the city's cultural identity. By providing an accessible and easily searchable resource for carnival song lyrics, this dataset can assist cultural researchers, linguists, and the general public in better understanding and appreciating the rich tradition of the Carnaval de Cádiz.
Additionally, this dataset can be utilized to enhance natural language processing (NLP) technologies in Spanish, a language that can sometimes be underrepresented in NLP research. By providing a high-quality, culture-specific Spanish text corpus, this dataset can aid in improving the accuracy and cultural relevance of Spanish NLP models.
However, there are also risks associated with the use of this dataset. For instance, if used to train text generation models, these models could generate content that reinforces cultural stereotypes or perpetuates existing biases. Moreover, the automatic interpretation of carnival song lyrics can be challenging due to cultural and linguistic subtleties, and errors in this interpretation could lead to misunderstandings or misrepresentations of Cadiz's culture.
Finally, although this dataset does not contain a low-resource or underrepresented language, it does focus on a specific cultural tradition from a specific region of Spain. Therefore, its use can impact the Cadiz community by helping to preserve and disseminate its unique culture and traditions.
### Discussion of Biases
The dataset is subject to several biases due to the nature of the data collection and the historical context of the Cadiz Carnival.
Firstly, there is a temporal bias in the dataset. More recent lyrics are overrepresented compared to older ones, as there is more information available on the internet about modern groups. This may lead to a skewed understanding of the evolution of the Carnival's themes over time.
Secondly, the dataset exhibits a popularity bias. Lyrics from more popular groups are overrepresented because individuals have chosen to write about them more frequently. This could potentially limit the diversity of styles and themes represented in the dataset.
Thirdly, there is a competition bias. Lyrics from groups that advanced further in the competition stages are overrepresented, resulting in more available lyrics from these groups. This might lead to an overemphasis on the styles and themes that tend to be more successful in the competition.
Lastly, the dataset reflects a gender bias. Given that there have historically been more male authors than female authors in the Cadiz Carnival, the majority of the dataset consists of lyrics written by men. This could potentially limit the representation of diverse perspectives and themes in the lyrics.
To mitigate these biases, we actively encourage the participation of the community. By verifying the different records of the dataset, reviewing the transcription of the lyrics and all the data for accuracy, and adding new lyrics, we hope to broaden the diversity and representation.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Iván Romero Reyna. Student of the Specialisation Course in Artificial Intelligence and Big Data at [IES Rafael Alberti](https://iesrafaelalberti.es) during the academic year 2022-2023.
- Jesús Federico Franco Medinilla. Student of the Specialisation Course in Artificial Intelligence and Big Data at [IES Rafael Alberti](https://iesrafaelalberti.es) during the academic year 2022-2023.
- Jesús Carlos Avecilla de la Herrán. Promoter in [Cádiz AI](https://www.spain-ai.com).
### Licensing Information
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0)
### Citation Information
```
@misc{letrascarnavalcadiz2023,
author = {Romero Reyna, Iván and Franco Medinilla, Jesús Federico and Avecilla de la Herrán, Jesús Carlos},
title = {letras-carnaval-cadiz},
year = {2023},
url = {https://huggingface.co/datasets/IES-Rafael-Alberti/letras-carnaval-cadiz}
}
```
### Contributions
Thanks to [@ivanro](https://huggingface.co/ivanro), [@jframed281](https://huggingface.co/jframed281) for adding this dataset.
Thanks to all the reviewers and contributors at [letrascarnavalcadiz.com](https://letrascarnavalcadiz.com). | 13,072 | [
[
-0.03118896484375,
-0.0218658447265625,
0.01016998291015625,
0.04046630859375,
-0.0255889892578125,
0.0274505615234375,
-0.0223846435546875,
-0.040252685546875,
0.0469970703125,
0.054779052734375,
-0.07171630859375,
-0.0797119140625,
-0.031463623046875,
0.01... |
larryvrh/WikiMatrix-v1-Ja_Zh-filtered | 2023-04-08T05:16:37.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:ja",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] | larryvrh | null | null | 7 | 17 | 2023-04-08T03:07:25 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: ja
dtype: string
- name: zh
dtype: string
splits:
- name: train
num_bytes: 149036235
num_examples: 690095
download_size: 115870646
dataset_size: 149036235
task_categories:
- translation
language:
- ja
- zh
size_categories:
- 100K<n<1M
---
Filtered and modified version of Japanese/Chinese language pair data from [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php).
Process steps:
1. Basic regex based filtering / length checking to remove abnormal pairs.
2. Semantic similarity filtering with a threshold value of 0.6, based on [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
3. Convert all Traditional Chinese sentences into Simplified Chinese with [zhconv](https://github.com/gumblex/zhconv).
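Step 2 reduces to a cosine-similarity threshold on sentence embeddings. The sketch below shows only that thresholding logic on toy vectors; in the actual pipeline the vectors would come from the LaBSE encoder, and only the 0.6 threshold is taken from the description above.

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def keep_pair(emb_ja, emb_zh, threshold=0.6):
    """Keep a sentence pair only if embedding similarity clears the threshold."""
    return cos_sim(emb_ja, emb_zh) >= threshold

print(keep_pair([1.0, 0.0], [1.0, 0.2]))  # True: similarity ≈ 0.98
print(keep_pair([1.0, 0.0], [0.0, 1.0]))  # False: orthogonal vectors
```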
------
经过过滤和修改的日语/中文语言对数据,来自[WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php)。
处理步骤:
1. 基本的基于正则表达式的过滤/长度检查,以删除异常对。
2. 基于[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)的语义相似性过滤,阈值为0.6。
3. 使用[zhconv](https://github.com/gumblex/zhconv)将所有繁体中文句子转换为简体中文。
------
以下はフィルタリングされ修正された日本語/中国語のペアデータです。データ元は[WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix.php)です。
処理手順:
1. 正規表現に基づくフィルタリング/長さのチェックを行い、異常なペアを削除します。
2. [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)に基づくセマンティック類似性フィルタリングを行い、閾値は0.6です。
3. [zhconv](https://github.com/gumblex/zhconv)を使って、すべての繁体字中国語の文を簡体字中国語に変換します。 | 1,457 | [
[
-0.04180908203125,
-0.062255859375,
0.02783203125,
0.0189971923828125,
-0.04193115234375,
-0.0220184326171875,
-0.0222015380859375,
-0.0274658203125,
0.043731689453125,
0.053802490234375,
-0.06817626953125,
-0.0537109375,
-0.020355224609375,
0.0230712890625,... |
AlekseyKorshuk/gpteacher-role-play-chatml | 2023-07-24T22:32:56.000Z | [
"region:us"
] | AlekseyKorshuk | null | null | 6 | 17 | 2023-04-27T20:08:22 | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 6168190
num_examples: 9111
download_size: 0
dataset_size: 6168190
---
# Dataset Card for "gpteacher-role-play-chatml"
Data preprocessing pipeline: https://github.com/AlekseyKorshuk/chat-data-pipeline | 428 | [
[
-0.01849365234375,
-0.026397705078125,
-0.00251007080078125,
0.0154266357421875,
-0.0110931396484375,
0.0099334716796875,
-0.006015777587890625,
0.00788116455078125,
0.0175323486328125,
0.0426025390625,
-0.072021484375,
-0.079345703125,
-0.034088134765625,
-... |
FreedomIntelligence/huatuo_consultation_qa | 2023-05-17T03:21:36.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:apache-2.0",
"medical",
"arxiv:2305.01526",
"region:us"
] | FreedomIntelligence | null | null | 8 | 17 | 2023-05-10T11:41:08 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 1M<n<10M
---
# Dataset Card for huatuo_consultation_qa
## Dataset Description
- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We collected data from a medical consultation website, consisting of many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers it. The basic information of doctors (including name, hospital organization, and department) was recorded.
We directly crawled patients' questions and doctors' answers as QA pairs, obtaining 32,708,346 pairs. Subsequently, we removed the QA pairs containing special characters and removed repeated pairs. Finally, we got 25,341,578 QA pairs.
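As a sketch of the two filtering passes described above (the exact "special characters" set is not published, so the pattern here is an illustrative assumption):

```python
import re

# Assumed definition of "special characters": control chars + U+FFFD.
_SPECIAL = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\ufffd]")

def clean_pairs(pairs):
    """Drop QA pairs containing special characters, then drop exact repeats."""
    seen, kept = set(), []
    for question, answer in pairs:
        if _SPECIAL.search(question) or _SPECIAL.search(answer):
            continue
        if (question, answer) in seen:
            continue
        seen.add((question, answer))
        kept.append((question, answer))
    return kept
```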
**Please note that for certain reasons we cannot directly provide the text data, so the answer part of our dataset is a URL. If you want text data, you can refer to the other two parts of our open-source datasets ([huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa), [huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa)), or use the URLs for data collection.**
## Dataset Creation
### Source Data
....
## Citation
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,859 | [
[
-0.0198974609375,
-0.037628173828125,
0.0266265869140625,
0.0068206787109375,
-0.032562255859375,
-0.022186279296875,
0.0038280487060546875,
-0.032623291015625,
0.027374267578125,
0.0361328125,
-0.0204620361328125,
-0.059478759765625,
-0.016510009765625,
0.0... |
lighteval/truthfulqa_helm | 2023-05-12T11:42:58.000Z | [
"region:us"
] | lighteval | null | null | 0 | 17 | 2023-05-12T11:42:54 | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: gold_index
dtype: int64
splits:
- name: train
num_bytes: 59000
num_examples: 163
- name: valid
num_bytes: 218075
num_examples: 654
download_size: 130906
dataset_size: 277075
---
# Dataset Card for "truthfulqa_helm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 493 | [
[
-0.040008544921875,
-0.0173187255859375,
0.0144805908203125,
0.0165863037109375,
-0.0106964111328125,
0.0009374618530273438,
0.015167236328125,
-0.0211181640625,
0.03753662109375,
0.042205810546875,
-0.06292724609375,
-0.06988525390625,
-0.037261962890625,
-... |
Finnish-NLP/mc4_3.1.0_fi_cleaned | 2023-05-19T16:20:51.000Z | [
"region:us"
] | Finnish-NLP | null | null | 0 | 17 | 2023-05-15T19:54:32 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[s]
- name: url
dtype: string
- name: perplexity_kenlm
dtype: int64
- name: label_identity_attack
dtype: float64
- name: label_insult
dtype: float64
- name: label_obscene
dtype: float64
- name: label_severe_toxicity
dtype: float64
- name: label_threat
dtype: float64
- name: label_toxicity
dtype: float64
splits:
- name: train
num_bytes: 103354369732
num_examples: 26468761
- name: validation
num_bytes: 101931416
num_examples: 26149
download_size: 7141130482
dataset_size: 103456301148
---
# Dataset Card for "mc4_3.1.0_fi_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 842 | [
[
-0.05731201171875,
-0.013916015625,
0.01360321044921875,
-0.0024890899658203125,
-0.021331787109375,
-0.0010423660278320312,
0.03533935546875,
-0.00989532470703125,
0.058929443359375,
0.059295654296875,
-0.0628662109375,
-0.042205810546875,
-0.0213165283203125,
... |
Pranavkpba2000/skin_cancer_small_dataset | 2023-05-16T11:12:18.000Z | [
"region:us"
] | Pranavkpba2000 | null | null | 0 | 17 | 2023-05-16T11:12:00 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AK
'1': BCC
'2': BKL
'3': DF
'4': MEL
'5': NV
'6': SCC
'7': VASC
splits:
- name: train
num_bytes: 66578294.72
num_examples: 11360
- name: test
num_bytes: 17394813.72
num_examples: 2840
download_size: 83755065
dataset_size: 83973108.44
---
# Dataset Card for "skin_cancer_small_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 653 | [
[
-0.01519775390625,
-0.0186004638671875,
0.022918701171875,
-0.00322723388671875,
-0.0207366943359375,
-0.004985809326171875,
0.0194244384765625,
-0.01316070556640625,
0.06671142578125,
0.046051025390625,
-0.050048828125,
-0.06744384765625,
-0.040283203125,
-... |
aalksii/ml-arxiv-papers | 2023-05-19T11:47:18.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"language:en",
"arxiv",
"ML",
"region:us"
] | aalksii | null | null | 1 | 17 | 2023-05-17T11:13:50 | ---
dataset_info:
features:
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 130808836.19633989
num_examples: 105832
- name: test
num_bytes: 14535413.803660113
num_examples: 11760
download_size: 81252051
dataset_size: 145344250
language:
- en
pretty_name: ML ArXiv Papers
task_categories:
- summarization
- text2text-generation
tags:
- arxiv
- ML
---
# Dataset Card for "ml-arxiv-papers"
This is a dataset containing ML ArXiv papers. The dataset is a version of the original one from [CShorten](https://huggingface.co/datasets/CShorten/ML-ArXiv-Papers), which is a part of the ArXiv papers dataset from [Kaggle](https://www.kaggle.com/datasets/Cornell-University/arxiv).
Three steps were applied to process the source data:
1. removal of unused columns;
2. train-test split;
3. removal of '\n' characters and trimming of spaces on both sides of the text.
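Step 3 amounts to a small normalization, sketched here under the assumption that only newline removal and side-trimming are needed:

```python
def normalize(text: str) -> str:
    """Drop newline characters and trim spaces on both sides of the text."""
    return text.replace("\n", " ").strip()
```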
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,044 | [
[
-0.036865234375,
-0.027923583984375,
0.006992340087890625,
0.00510406494140625,
-0.02716064453125,
-0.0038890838623046875,
0.01285552978515625,
-0.004779815673828125,
0.0308380126953125,
0.052276611328125,
-0.034332275390625,
-0.047576904296875,
-0.0349731445312... |
gorilla-llm/APIBench | 2023-05-29T06:31:49.000Z | [
"language:en",
"license:apache-2.0",
"api",
"arxiv:2305.15334",
"region:us"
] | gorilla-llm | null | null | 32 | 17 | 2023-05-29T06:21:06 | ---
license: apache-2.0
language:
- en
tags:
- api
---
# Gorilla: Large Language Model Connected with Massive APIs
By Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez ([Project Website](https://shishirpatil.github.io/gorilla/))
[](https://arxiv.org/abs/2305.15334) [](https://discord.gg/3apqwwME) [](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
`Gorilla` enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla can write a semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, or open a PR, or email us if you would like to have your API incorporated as well.
### Dataset Date
05/28/2023
### Organization
Gorilla LLM (UC Berkeley)
[
-0.037628173828125,
-0.06793212890625,
0.021484375,
0.045806884765625,
0.007080078125,
0.020233154296875,
-0.03253173828125,
-0.053741455078125,
0.01364898681640625,
0.035888671875,
-0.046966552734375,
-0.04339599609375,
-0.0364990234375,
0.00297737121582031... |
whu9/mediasum_postprocess | 2023-06-03T06:02:12.000Z | [
"region:us"
] | whu9 | null | null | 0 | 17 | 2023-06-03T06:01:07 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 3913935357
num_examples: 443511
- name: validation
num_bytes: 86873579
num_examples: 9999
- name: test
num_bytes: 88635215
num_examples: 9997
download_size: 2335096802
dataset_size: 4089444151
---
# Dataset Card for "mediasum_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 632 | [
[
-0.04241943359375,
-0.00800323486328125,
0.00428009033203125,
0.01415252685546875,
-0.01824951171875,
-0.01116943359375,
0.00833892822265625,
0.00751495361328125,
0.06610107421875,
0.046051025390625,
-0.059906005859375,
-0.047271728515625,
-0.06396484375,
-0... |
vietgpt/c4_vi | 2023-06-22T06:38:28.000Z | [
"region:us"
] | vietgpt | null | null | 0 | 17 | 2023-06-12T19:24:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
- name: id
dtype: string
- name: perplexity
dtype: float64
splits:
- name: train
num_bytes: 74501968937.28577
num_examples: 16203296
download_size: 40109713280
dataset_size: 74501968937.28577
---
# Dataset Card for "c4_vi"
Num tokens: 14,998,688,762 tokens | 418 | [
[
-0.01617431640625,
-0.007053375244140625,
0.004558563232421875,
0.033233642578125,
-0.041473388671875,
0.006198883056640625,
0.00653076171875,
0.0033779144287109375,
0.03082275390625,
0.0289764404296875,
-0.002468109130859375,
-0.05718994140625,
-0.0379333496093... |
KaiLv/UDR_E2E | 2023-06-21T12:38:25.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 17 | 2023-06-21T12:38:13 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: gem_id
dtype: string
- name: gem_parent_id
dtype: string
- name: question
dtype: string
- name: target
dtype: string
- name: references
dtype: string
splits:
- name: train
num_bytes: 3627637
num_examples: 12563
- name: validation
num_bytes: 1009818
num_examples: 1483
- name: test
num_bytes: 1240499
num_examples: 1847
download_size: 1727722
dataset_size: 5877954
---
# Dataset Card for "UDR_E2E"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 664 | [
[
-0.03826904296875,
-0.01340484619140625,
0.007598876953125,
0.0094757080078125,
-0.00921630859375,
-0.00823211669921875,
0.03179931640625,
-0.0234222412109375,
0.049468994140625,
0.026153564453125,
-0.048431396484375,
-0.043701171875,
-0.035980224609375,
-0.... |
yuhsinchan/nmsqa_seg | 2023-06-25T18:11:59.000Z | [
"region:us"
] | yuhsinchan | null | null | 0 | 17 | 2023-06-25T18:11:10 | ---
dataset_info:
features:
- name: context_code
sequence: int16
- name: context_cnt
sequence: int16
- name: question_code
sequence: int16
- name: question_cnt
sequence: int16
- name: start_idx
dtype: int64
- name: end_idx
dtype: int64
splits:
- name: train
num_bytes: 159406324
num_examples: 87075
- name: dev
num_bytes: 19749204
num_examples: 10493
download_size: 56905169
dataset_size: 179155528
---
# Dataset Card for "nmsqa_seg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 630 | [
[
-0.0413818359375,
-0.0122222900390625,
0.00909423828125,
0.00725555419921875,
-0.024444580078125,
-0.0013580322265625,
0.0304412841796875,
0.009033203125,
0.0732421875,
0.041778564453125,
-0.0653076171875,
-0.060760498046875,
-0.04095458984375,
-0.0086517333... |
commaai/commavq | 2023-09-19T21:38:38.000Z | [
"size_categories:100K<n<1M",
"license:mit",
"region:us"
] | commaai | TODO | null | 9 | 17 | 2023-06-27T04:43:38 | ---
license: mit
size_categories:
- 100K<n<1M
---
# commaVQ
commaVQ is a dataset of 100,000 heavily compressed driving videos for machine learning research. Heavily compressed driving videos like these are useful for experimenting with GPT-like video prediction models. This repo includes an encoder/decoder and an example of a video prediction model.
Examples and trained models can be found here: https://github.com/commaai/commavq
# Overview
A VQ-VAE [1,2] was used to heavily compress each frame into 128 "tokens" of 10 bits each. Each entry of the dataset is a "segment" of compressed driving video, i.e. 1min of frames at 20 FPS. Each file is of shape 1200x8x16 and saved as int16.
Note that the compressor is extremely lossy on purpose: it makes the dataset smaller and easier to play with (training GPT-style models with a large context size, fast autoregressive generation, etc.). We might extend the dataset with a less lossy version when we see fit.
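As a quick sanity check, a segment of the shape described above can be mocked with NumPy (random token ids standing in for real data):

```python
import numpy as np

# Mock one segment: 1200 frames (1 min at 20 FPS), each frame compressed
# to an 8x16 grid of 10-bit tokens (values 0..1023), stored as int16.
rng = np.random.default_rng(0)
segment = rng.integers(0, 1024, size=(1200, 8, 16), dtype=np.int16)

assert segment.shape == (1200, 8, 16)
assert segment.dtype == np.int16
assert 0 <= int(segment.min()) and int(segment.max()) < 1024
tokens_per_frame = segment.shape[1] * segment.shape[2]  # 128 tokens per frame
```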
<video title="source" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/91894bf7-592b-4204-b3f2-3e805984045c" type="video/mp4">
</video>
<video title="compressed" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/3a799ac8-781e-461c-bf14-c15cea42b985" type="video/mp4">
</video>
<video title="imagined" controls>
<source src="https://github.com/commaai/commavq/assets/29985433/f6f7699b-b6cb-4f9c-80c9-8e00d75fbfae" type="video/mp4">
</video>
# References
[1] Van Den Oord, Aaron, and Oriol Vinyals. "Neural discrete representation learning." Advances in neural information processing systems 30 (2017).
[2] Esser, Patrick, Robin Rombach, and Bjorn Ommer. "Taming transformers for high-resolution image synthesis." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. | 1,797 | [
[
-0.033660888671875,
-0.047088623046875,
0.0083160400390625,
-0.005779266357421875,
-0.0198974609375,
0.0083465576171875,
-0.0191802978515625,
0.00830078125,
-0.0026988983154296875,
0.02294921875,
-0.0687255859375,
-0.01317596435546875,
-0.0552978515625,
-0.0... |
SillyL12324/girls | 2023-07-04T21:03:51.000Z | [
"region:us"
] | SillyL12324 | null | null | 1 | 17 | 2023-07-04T20:06:04 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 26971470920.248
num_examples: 343222
download_size: 10458353483
dataset_size: 26971470920.248
---
# Dataset Card for "girls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 402 | [
[
-0.0246124267578125,
-0.03204345703125,
0.0171966552734375,
0.019744873046875,
-0.004161834716796875,
-0.0034160614013671875,
0.04046630859375,
-0.00807952880859375,
0.044281005859375,
0.0295562744140625,
-0.07537841796875,
-0.0557861328125,
-0.048309326171875,
... |
Isotonic/OpenOrca-deduped | 2023-08-24T13:21:18.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:zero-shot-classification",
"task_categories:question-... | Isotonic | null | null | 3 | 17 | 2023-07-12T22:29:09 | ---
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: reward
dtype: float32
splits:
- name: train
num_bytes: 3274600633.90245
num_examples: 2409134
- name: test
num_bytes: 409325419.048775
num_examples: 301142
- name: validation
num_bytes: 409325419.048775
num_examples: 301142
download_size: 2268645581
dataset_size: 4093251472.0000005
task_categories:
- text-generation
- text2text-generation
- conversational
- text-classification
- token-classification
- table-question-answering
- zero-shot-classification
- question-answering
- summarization
- feature-extraction
language:
- en
size_categories:
- 1M<n<10M
arxiv:
- 2301.13688
- 2306.02707
---
# Dataset Card for Isotonic/OpenOrca-deduped
## Dataset Summary
This dataset is a deduplicated version of [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
*MinHash Deduplication with Jaccard Threshold = 0.80*
```
Original dataset size: 4233923
Number of duplicate clusters: 522077
Files in duplicate cluster: 2115143
Unique files in duplicate cluster: 892638
Filtered dataset size: 3011418
``` | 1,256 | [
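MinHash itself needs an extra library, but the Jaccard-threshold idea behind the deduplication can be sketched in plain Python (an O(n²) toy scan over character shingles, not the actual pipeline):

```python
def shingles(text: str, n: int = 3) -> set:
    """Character n-grams of a whitespace-normalized, lowercased string."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

def dedupe(texts, threshold=0.80):
    """Keep a text only if it stays below the threshold against all kept texts."""
    kept = []
    for t in texts:
        if all(jaccard(t, k) < threshold for k in kept):
            kept.append(t)
    return kept
```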
[
-0.0303497314453125,
-0.029541015625,
-0.005649566650390625,
0.013946533203125,
-0.0635986328125,
-0.0367431640625,
-0.01084136962890625,
-0.02923583984375,
0.037689208984375,
0.05621337890625,
-0.0175323486328125,
-0.0672607421875,
-0.0301971435546875,
0.00... |
Gustrd/dolly-15k-libretranslate-pt | 2023-07-18T02:04:29.000Z | [
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:pt",
"license:cc-by-sa-3.0",
"region:us"
] | Gustrd | null | null | 2 | 17 | 2023-07-13T12:30:13 | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- pt
size_categories:
- 10K<n<100K
---
# Summary
databricks-dolly-15k ( https://huggingface.co/datasets/databricks/databricks-dolly-15k/ ) is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
This is a portuguese translation done with libretranslate ( https://github.com/LibreTranslate/LibreTranslate ).
This dataset can be used for any purpose, whether academic or commercial, under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Portuguese
Version: 1.0
---
# Original Readme
## Dataset Overview
databricks-dolly-15k is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language models to exhibit the magical interactivity of ChatGPT. Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the context field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. [42]) which we recommend users remove for downstream applications.
## Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper. For example, contributor--generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
## Dataset
### Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source, human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
### Sources
- **Human-generated data:** Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia:** For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization) contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the target passages.
### Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing:** Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA:** Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA:** Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization:** Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction:** These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification:** These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming:** Think up lots of examples in response to a question asking to brainstorm ideas.
### Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, it contains no private persons' personal identifiers or sensitive information.
### Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
[
-0.032867431640625,
-0.0828857421875,
0.0140533447265625,
0.020538330078125,
-0.00934600830078125,
-0.00829315185546875,
-0.020263671875,
-0.01309967041015625,
-0.00034999847412109375,
0.039031982421875,
-0.0491943359375,
-0.045867919921875,
-0.018798828125,
... |
jaimevera1107/similarity-sentences-spanish | 2023-07-24T14:11:43.000Z | [
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:es",
"license:mit",
"region:us"
] | jaimevera1107 | null | null | 1 | 17 | 2023-07-16T05:35:49 | ---
license: mit
task_categories:
- sentence-similarity
language:
- es
size_categories:
- 10K<n<100K
pretty_name: SimilaritySpanishDataset
---
# similarity-sentences-spanish (SSS)
### Dataset Summary
This dataset comprises a collection of sentences generated using Chat GPT-3, covering various general topics.
The dataset also includes sentences from two existing datasets, STS-ES and STSB-Multi-MT, as well as SICK, which were used as additional sources.
The sentences in this dataset were generated to exhibit varying levels of similarity based on randomly divided prompts.
| **Source** | **Share (rows)** | **Count (rows)** | **Score (avg)** |
|-----------|-----------------|------------------|----------------|
| GPT | 22.71 % | 3982 | 0.50 |
| STBS | 49.21 % | 8628 | 0.53 |
| STS | 17.69 % | 3102 | 0.42 |
| SICK | 10.38 % | 1820 | 0.51 |
| **Total** | 100% | 17532 | 0.49 |
### Objective
The purpose of creating this dataset using Chat GPT-3 was to generate diverse text samples covering various topics and to ensure a balanced distribution of scores both overall and across different themes. By leveraging Chat GPT-3, the dataset aims to provide a wide range of sentence pairs with varying degrees of similarity for further analysis and research purposes.
### Languages
Spanish
## Dataset Structure
### Data Fields
- Sentence 1: The first sentence to be compared.
- Sentence 2: The second sentence to be compared.
- Score: A number between 0 and 1 indicating the similarity between Sentence 1 and Sentence 2, with 1 indicating high similarity.
- Source: The source of the information, represented by its abbreviation.
## Dataset Biases
This dataset inherits the biases present in the two existing datasets and the biases inherent in a text generation model like Chat GPT-3.
### Source Data
The dataset was created using the following sources:
1. Already existing datasets:
- STS-ES ([STS](https://huggingface.co/datasets/PlanTL-GOB-ES/sts-es))
- STSB-Multi-MT ([STSB](https://huggingface.co/datasets/stsb_multi_mt))
2. Newly generated data:
- Chat GPT-3: The sentences were generated using Chat GPT-3 for various general topics.
The dataset includes sentences from various themes, such as:
- Alimentación y Cocina (Food and Cooking)
- Arte y Cultura (Art and Culture)
- Ciencia y Tecnología (Science and Technology)
- Cine y Televisión (Film and Television)
- Deportes (Sports)
- Economía (Economy)
- Educación (Education)
- Estadística (Statistics)
- Filosofía (Philosophy)
- Finanzas (Finance)
- Historia (History)
- Literatura (Literature)
- Medicina (Medicine)
- Medio Ambiente y Sostenibilidad (Environment and Sustainability)
- Moda y Estilo (Fashion and Style)
- Música (Music)
- Organizacional (Organizational)
- Política y Gobierno (Politics and Government)
- Psicología (Psychology)
- Religión y Espiritualidad (Religion and Spirituality)
- Salud y Bienestar (Health and Wellness)
Please note that these themes are not exhaustive.
The prompts for each label (score) are as follows:
```python
descripciones_similaridad = {
"0.0": "Rewrite the following sentence in a new sentence about a completely different topic, without any apparent connection to the original sentence. The two sentences must be completely distinct and should not share any thematic similarity.",
"0.1": "Rewrite the following sentence in a new sentence about a topic completely different from the original sentence. Make sure the two sentences are entirely different and do not share any thematic similarity. At least 90% of the information level should change.",
"0.2": "Rewrite the following sentence in a new sentence about the same topic as the original sentence, but not an exact copy. You can express different ideas, but the general theme should be similar. Ensure at least 80% of the information level is different.",
"0.3": "Rewrite the following sentence in a new sentence about a topic related to the original sentence, though not equivalent. Both sentences must share a common theme or general idea, but they can express different viewpoints. At least 70% of the information level should change.",
"0.4": "Rewrite the following sentence in a new sentence that is not equivalent to the original, but has some similar details or elements. Ensure at least 60% of the information level is different.",
"0.5": "Rewrite the following sentence in a new sentence that is not equivalent to the original, but is related to some extent. Both sentences should have some details in common and be thematically related at least 50% of the information level.",
"0.6": "Rewrite the following sentence in a new sentence that is approximately equivalent to the original, but may differ in important information or have certain missing elements. The changes should slightly affect the meaning, and at least 60% of the information level should be preserved.",
"0.7": "Rewrite the following sentence in a new sentence that is approximately equivalent to the original, but may differ in important information or have some missing elements. Ensure at least 70% of the information level remains the same.",
"0.8": "Rewrite the following sentence in a new sentence that is mostly equivalent to the original, but may differ in some unimportant details. The changes should affect a maximum of 20% of the information level.",
"0.9": "Rewrite the following sentence in a new sentence that is nearly equivalent to the original, but may have some differences in minor details that do not significantly impact its meaning. The changes should affect a maximum of 10% of the information level.",
"1.0": "Rewrite the following sentence in a new sentence that is completely equivalent to the original, as they express exactly the same idea or meaning. The two sentences must share 100% of the information level.",
}
```
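The similarity-keyed instruction set above can be consumed programmatically. Below is a hypothetical sketch (the lookup helper is not part of the dataset, and the instruction texts are abbreviated) that selects the instruction whose key is numerically closest to a target similarity score:

```python
# Hypothetical helper (not part of the dataset): pick the rewrite
# instruction whose similarity key is closest to a target score.
# Instruction texts are abbreviated here for brevity.
INSTRUCTIONS = {
    "0.2": "Rewrite ... at least 80% of the information level is different.",
    "0.5": "Rewrite ... thematically related at least 50% of the information level.",
    "1.0": "Rewrite ... the two sentences must share 100% of the information level.",
}

def instruction_for(score: float) -> str:
    """Return the instruction whose numeric key is nearest to `score`."""
    key = min(INSTRUCTIONS, key=lambda k: abs(float(k) - score))
    return INSTRUCTIONS[key]

print(instruction_for(0.47))  # nearest key is "0.5"
```

With the full ten-key dictionary from the card, the same nearest-key lookup maps any gold similarity score onto its generation prompt.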
- SICK ([SICK Dataset](https://huggingface.co/datasets/sick))
The dataset also includes translated and sampled sentences from the SICK dataset, using Helsinki ([helsinki - EN -ES](https://huggingface.co/datasets/sick)) as the translation tool, to bring the average score of the entire dataset close to 0.5.
To keep the representation balanced and avoid giving undue weight to translated data that was neither originally written nor reviewed in Spanish, scores are intended to be centered around 0.5.
bigheiniuJ/ChatGPTAug | 2023-07-23T00:06:08.000Z | [
"region:us"
] | bigheiniuJ | null | null | 0 | 17 | 2023-07-23T00:01:55 | ---
dataset_info:
features:
- name: label
dtype: string
- name: instance_text
dtype: string
- name: seed
dtype: string
- name: split
dtype: string
- name: task
dtype: string
- name: id
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: dev
num_bytes: 263432
num_examples: 2205
- name: test
num_bytes: 6590715
num_examples: 45315
- name: train
num_bytes: 278076
num_examples: 2250
download_size: 3148358
dataset_size: 7132223
---
# Dataset Card for "ChatGPTAug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
seungheondoh/LP-MusicCaps-MSD | 2023-08-01T04:06:49.000Z | [
"size_categories:100K<n<1M",
"language:en",
"art",
"music",
"text-to-music",
"music-to-text",
"arxiv:2307.16372",
"region:us"
] | seungheondoh | null | null | 7 | 17 | 2023-07-26T12:33:38 | ---
language:
- en
tags:
- art
- music
- text-to-music
- music-to-text
pretty_name: LP-MusicCaps-MSD
size_categories:
- 100K<n<1M
---
======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================
# Dataset Card for LP-MusicCaps-MSD
## Dataset Description
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
## Dataset Summary
**LP-MusicCaps** is a Large Language Model based Pseudo Music Caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and Million Song Dataset ECALS subset.
- **LP-MusicCaps MSD (This Repo)**: 0.5M Audio with 2.2M Caption. We utilize 1054 unique tags in the [MSD-ECALS](https://github.com/SeungHeonDoh/msd-subsets) to perform tag-to-caption generation through LLM.
- [LP-MusicCaps MTT](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MTT): 22k Audio with 88k Caption
- [LP-MusicCaps MC](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MC): 6k Audio with 22k Caption.
## Data Instances
Each instance in LP-MusicCaps MSD (this repo) contains multiple audio-text pairs with meta-attributes:
```
{
'track_id': 'TRIHXPZ128F1466744',
'title': 'In The Sunshine',
'artist_name': 'ARRESTED DEVELOPMENT',
'release': 'Zingalamaduni',
'year': 1994,
'tag': ['laid back mellow',
'hip hop',
'rnb',
'amiable good natured',
'rap',
'urban',
'gentle',
'political rap',
'soul',
'calm peaceful',
'summery',
'cheerful',
'alternative rap'
],
'caption_writing': 'An amiable and laid back alternative rap tune, this summery and cheerful song blends elements of soul and R&B with a gentle, mellow rap flow to create a calm and peaceful urban vibe that is both hip hop and political in its message.',
'caption_summary': 'This summery, alternative rap song is a mellow and gentle blend of hip hop, RnB, and political rap with a cheerful and amiable good natured vibe.',
'caption_paraphrase': 'This laid back mellow rap song infuses soulful and urban elements while showcasing a gentle and amiable good natured vibe, perfect for a summery day. With hints of cheerful R&B and hip hop, the alternative political rap lyrics bring balance to this peaceful and calming tune.',
'caption_attribute_prediction': 'This mellow, soulful tune is a perfect blend of rap and RnB, with a gentle beat and smooth flow that will transport you to the laid-back urban vibes of a sunny summertime day. The amiable good-natured lyrics touch on political themes, while the alternative rap style adds a cheerful, upbeat twist to the message. Overall, this is a hip-hop gem thats sure to put you in a peaceful, calm state of mind.',
'path': '3/0/303545.clip.mp3'
}
```
## Pseudo Caption Example:
Input Tags:
*"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*
Output Pseudo Captions
*"instrumental track has a joyful and playful vibe, perfect for a video game theme. With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*
[More Information for pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)
## Data Fields
| Name | Type | Description |
|------------------------------|-----------------|----------------------------------------------------------------------|
| track_id | string | Unique identifier for the track |
| title | string | Title of the song |
| artist_name | string | Name of the artist performing the song |
| release | string | Release name or album name of the song |
| year | integer | Year of the song's release |
| tag | list of strings | List of tags associated with the song |
| caption_writing | string | Pseudo caption generated through a writing instruction |
| caption_summary | string | Pseudo caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo caption generated through a paraphrase instruction |
| caption_attribute_prediction | string | Pseudo caption generated through an attribute_prediction instruction |
| path | string | File path or location of the audio clip |
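As a minimal sketch (a hypothetical helper, not official tooling), the four pseudo-caption fields of a row can be gathered by instruction name using the field layout above:

```python
# Minimal sketch (assumption, not official tooling): collect the four
# pseudo-caption fields of one dataset row, keyed by instruction name.
CAPTION_FIELDS = ("writing", "summary", "paraphrase", "attribute_prediction")

def captions_by_instruction(row: dict) -> dict:
    """Map instruction name -> pseudo caption for one row."""
    return {name: row[f"caption_{name}"] for name in CAPTION_FIELDS}

# Abbreviated row, following the instance schema shown above.
row = {
    "track_id": "TRIHXPZ128F1466744",
    "caption_writing": "An amiable and laid back alternative rap tune...",
    "caption_summary": "This summery, alternative rap song...",
    "caption_paraphrase": "This laid back mellow rap song...",
    "caption_attribute_prediction": "This mellow, soulful tune...",
}
caps = captions_by_instruction(row)
print(sorted(caps))  # → ['attribute_prediction', 'paraphrase', 'summary', 'writing']
```

This mirrors the `caption_*` naming convention of the table, so filtering out `caption_attribute_prediction` (see the caution above) is a one-line change to `CAPTION_FIELDS`.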
## Data Splits
- train: 444865
- valid: 34481
- test: 34631
## Considerations for Using the Data
The LP-MusicCaps dataset is recommended for research purposes. Due to a known mislabeling issue, we recommend not using `caption_attribute_prediction` and `pseudo_attribute` unless it is specifically for large-scale pretraining. Additionally, the field `is_crawled` indicates the samples used in the reference paper mentioned below.
## Discussion of Biases
It will be described in a paper to be released soon.
## Other Known Limitations
It will be described in a paper to be released soon.
izumi-lab/wikipedia-en-20230720 | 2023-07-29T03:06:05.000Z | [
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | izumi-lab | null | null | 6 | 17 | 2023-07-28T09:15:57 | ---
dataset_info:
features:
- name: curid
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16118978135
num_examples: 6650632
download_size: 9566993111
dataset_size: 16118978135
license: cc-by-sa-3.0
language:
- en
---
# Dataset Card for "wikipedia-en-20230720"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ixarchakos/tops_laydown | 2023-08-22T15:06:04.000Z | [
"region:us"
] | ixarchakos | null | null | 0 | 17 | 2023-08-04T11:24:21 | Entry not found
mrSoul7766/ECTSum | 2023-08-07T16:59:19.000Z | [
"region:us"
] | mrSoul7766 | null | null | 0 | 17 | 2023-08-07T16:57:45 | Entry not found
ds4sd/FinTabNet_OTSL | 2023-08-31T16:01:59.000Z | [
"task_categories:object-detection",
"task_categories:table-to-text",
"size_categories:10K<n<100K",
"license:other",
"table-structure-recognition",
"table-understanding",
"PDF",
"arxiv:2305.03393",
"region:us"
] | ds4sd | null | null | 2 | 17 | 2023-08-10T07:52:33 | ---
license: other
pretty_name: FinTabNet-OTSL
size_categories:
- 10K<n<100K
tags:
- table-structure-recognition
- table-understanding
- PDF
task_categories:
- object-detection
- table-to-text
---
# Dataset Card for FinTabNet_OTSL
## Dataset Description
- **Homepage:** https://ds4sd.github.io
- **Paper:** https://arxiv.org/pdf/2305.03393
### Dataset Summary
This dataset is a conversion of the original [FinTabNet](https://developer.ibm.com/exchanges/data/all/fintabnet/) into the OTSL format presented in our paper "Optimized Table Tokenization for Table Structure Recognition". The dataset includes the original annotations alongside new additions.
### Dataset Structure
* cells: original dataset cell ground truth (content).
* otsl: new reduced table structure token format.
* html: original dataset ground truth HTML (structure).
* html_restored: HTML generated from OTSL.
* cols: grid column count.
* rows: grid row count.
* image: PIL image.
### OTSL Vocabulary:
**OTSL**: new reduced table structure token format
More information on the OTSL table structure format and its concepts can be found in our paper.
The format used in this dataset extends the one presented in the paper and introduces slight modifications:
* "fcel" - cell that has content in it
* "ecel" - cell that is empty
* "lcel" - left-looking cell (to handle horizontally merged cells)
* "ucel" - up-looking cell (to handle vertically merged cells)
* "xcel" - 2D span cell; in this dataset it covers the entire area of a merged cell
* "nl" - new line token
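As an illustration of how the token vocabulary above encodes a table grid, here is a minimal sketch (not the authors' reference implementation; the example table is hypothetical) that splits an OTSL token stream on `nl` tokens and recovers the `rows`/`cols` fields:

```python
# Minimal sketch (not the reference implementation): parse an OTSL token
# sequence into a 2D grid and recover the `rows`/`cols` dimensions.
def otsl_to_grid(tokens):
    """Split an OTSL token stream on 'nl' tokens into grid rows."""
    grid, row = [], []
    for tok in tokens:
        if tok == "nl":        # the new-line token closes the current row
            grid.append(row)
            row = []
        else:
            row.append(tok)
    if row:                    # tolerate a missing trailing 'nl'
        grid.append(row)
    # every row must have the same length for a valid rectangular grid
    assert len({len(r) for r in grid}) == 1, "non-rectangular OTSL grid"
    return grid

# Hypothetical 2x3 table: the first row's second cell is horizontally
# merged over columns 2-3 ('fcel' followed by a left-looking 'lcel').
tokens = ["fcel", "fcel", "lcel", "nl",
          "fcel", "ecel", "fcel", "nl"]
grid = otsl_to_grid(tokens)
print(len(grid), len(grid[0]))  # → 2 3
```

Because each `nl` closes exactly one row, the grid dimensions fall out of the token stream directly, which is part of what makes the OTSL representation compact compared to HTML structure tags.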
### Data Splits
The dataset provides three splits:
- `train`
- `val`
- `test`
## Additional Information
### Dataset Curators
The dataset is converted by the [Deep Search team](https://ds4sd.github.io/) at IBM Research.
You can contact us at [deepsearch-core@zurich.ibm.com](mailto:deepsearch-core@zurich.ibm.com).
Curators:
- Maksym Lysak, [@maxmnemonic](https://github.com/maxmnemonic)
- Ahmed Nassar, [@nassarofficial](https://github.com/nassarofficial)
- Christoph Auer, [@cau-git](https://github.com/cau-git)
- Nikos Livathinos, [@nikos-livathinos](https://github.com/nikos-livathinos)
- Peter Staar, [@PeterStaar-IBM](https://github.com/PeterStaar-IBM)
### Citation Information
```bib
@misc{lysak2023optimized,
title={Optimized Table Tokenization for Table Structure Recognition},
author={Maksym Lysak and Ahmed Nassar and Nikolaos Livathinos and Christoph Auer and Peter Staar},
year={2023},
eprint={2305.03393},
archivePrefix={arXiv},
primaryClass={cs.CV}
}```