id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nlplabtdtu/people_data_only_chatgpt | 2023-08-10T15:11:58.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 45 | 2023-08-10T15:03:39 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
nlplabtdtu/people_qa_short_answer | 2023-08-10T16:11:48.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 45 | 2023-08-10T15:15:54 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
SaiedAlshahrani/Wikipedia-Corpora-Report | 2023-10-30T09:44:27.000Z | [
"size_categories:1K<n<10K",
"license:mit",
"region:us"
] | SaiedAlshahrani | null | null | 0 | 45 | 2023-08-19T02:28:29 | ---
license: mit
pretty_name: Wikipedia-Corpora-Report
size_categories:
- 1K<n<10K
---
# Dataset Card for "Wikipedia-Corpora-Report"
This dataset is used as a metadata database for the online [WIKIPEDIA CORPORA META REPORT](https://wikipedia-corpora-report.streamlit.app/) dashboard that illustrates how humans and bots generate or edit Wikipedia editions and provides metrics for “pages” and “edits” for all Wikipedia editions (320 languages). The “pages” metric counts articles and non-articles, while the “edits” metric tallies edits on articles and non-articles, all categorized by contributor type: humans or bots. The metadata is downloaded from [Wikimedia Statistics](https://stats.wikimedia.org/#/all-projects), then processed and uploaded to the Hugging Face Hub as a dataset.
For more details about the dataset, please **read** and **cite** our paper:
```bibtex
@inproceedings{alshahrani-etal-2023-implications,
title = "{{Performance Implications of Using Unrepresentative Corpora in Arabic Natural Language Processing}}",
author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna",
booktitle = "Proceedings of the The First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://webspace.clarkson.edu/~alshahsf/unrepresentative_corpora.pdf",
doi = "#################",
pages = "###--###",
abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
}
```
| 3,203 | [
[
-0.054290771484375,
-0.034149169921875,
0.0013437271118164062,
0.006591796875,
-0.024993896484375,
0.00965118408203125,
-0.04388427734375,
-0.05859375,
0.0163421630859375,
0.0258941650390625,
-0.0269775390625,
-0.056854248046875,
-0.045623779296875,
0.041015... |
mtc/swisstext23-20min-annotation-data | 2023-08-25T08:22:10.000Z | [
"region:us"
] | mtc | null | null | 0 | 45 | 2023-08-21T09:12:26 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: int64
- name: titleHeader
dtype: string
- name: title
dtype: string
- name: lead
dtype: string
- name: article
dtype: string
- name: summary
dtype: string
- name: article_sentence_count
dtype: int64
- name: summary_sentence_count
dtype: int64
- name: __index_level_0__
dtype: int64
- name: url
dtype: string
- name: paragraphs
sequence: string
splits:
- name: test
num_bytes: 998931
num_examples: 200
download_size: 0
dataset_size: 998931
---
# Dataset Card for "swisstext23-20min-annotation-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 844 | [
[
-0.042266845703125,
-0.0308837890625,
0.021697998046875,
0.058258056640625,
-0.0306549072265625,
0.00127410888671875,
-0.0026493072509765625,
-0.0394287109375,
0.04791259765625,
0.0455322265625,
-0.07586669921875,
-0.04571533203125,
-0.031219482421875,
-0.00... |
mlabonne/Evol-Instruct-Python-1k | 2023-08-25T16:31:50.000Z | [
"region:us"
] | mlabonne | null | null | 1 | 45 | 2023-08-25T16:28:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 5465833
num_examples: 1000
download_size: 2322359
dataset_size: 5465833
---
# Evol-Instruct-Python-1k
Subset of the [`mlabonne/Evol-Instruct-Python-26k`](https://huggingface.co/datasets/mlabonne/Evol-Instruct-Python-26k) dataset with only 1,000 samples.
It was created by filtering out rows (instruction + output) longer than 2,048 tokens, then keeping the 1,000 longest remaining samples.
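The two-step selection described above can be sketched as follows. This is an illustrative reconstruction, not the dataset's actual build script: a real run would count tokens with Llama's tokenizer, whereas `len(text.split())` is used here as a stand-in.

```python
# Sketch of the filtering described above. A real run would count tokens
# with Llama's tokenizer; whitespace splitting is a stand-in here.

def filter_and_keep_longest(rows, max_tokens=2048, keep=1000):
    """Drop rows over `max_tokens`, then keep the `keep` longest."""
    def n_tokens(row):
        return len((row["instruction"] + " " + row["output"]).split())

    kept = [r for r in rows if n_tokens(r) <= max_tokens]
    kept.sort(key=n_tokens, reverse=True)
    return kept[:keep]

# Tiny illustrative corpus (not real dataset rows).
rows = [
    {"instruction": "word " * 3000, "output": "too long, dropped"},
    {"instruction": "short", "output": "answer"},
    {"instruction": "a slightly longer question", "output": "and answer"},
]
subset = filter_and_keep_longest(rows, keep=2)
print(len(subset))
```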
Here is the distribution of the number of tokens in each row using Llama's tokenizer:
 | 756 | [
[
-0.0430908203125,
-0.038482666015625,
0.01407623291015625,
0.0254058837890625,
-0.0309600830078125,
0.00014781951904296875,
0.01337432861328125,
-0.01654052734375,
0.0455322265625,
0.032470703125,
-0.050567626953125,
-0.042938232421875,
-0.026580810546875,
0... |
DataProvenanceInitiative/Commercially-Verified-Licenses | 2023-10-18T17:44:49.000Z | [
"region:us"
] | DataProvenanceInitiative | null | null | 0 | 45 | 2023-09-18T04:31:20 |
# Dataset Card for **Data Provenance Initiative - Commercially-Verified-Licenses**
## Dataset Description
- **Homepage:** https://github.com/Data-Provenance-Initiative/Data-Provenance-Collection
- **Repository:** https://github.com/Data-Provenance-Initiative/Data-Provenance-Collection
- **Paper:** https://genlaw.github.io/CameraReady/20.pdf
- **Point of Contact:** data.provenance.init@gmail.com
- **NOTE:** Licenses for these datasets are "self-reported" and collected by best-effort volunteers on a per-dataset basis. Please find more details in the paper linked above.
### Legal Disclaimer / Notice
Collected License Information is **NOT** Legal Advice.
It is important to note that we collect self-reported licenses from the papers and repositories that released these datasets, and categorize them to the best of our ability, as a volunteer research and transparency initiative.
The information provided by any of our works and any outputs of the Data Provenance Initiative do not, and are not intended to, constitute legal advice; instead, all information, content, and materials are for general informational purposes only.
Readers and users should seek their own legal advice from counsel in their relevant jurisdiction.
### Dataset Summary
A wave of recent language models have been powered by large collections of natural language datasets. The sudden race to train models on these disparate collections of incorrectly, ambiguously, or under-documented datasets has left practitioners unsure of the legal and qualitative characteristics of the models they train. To remedy this crisis in data transparency and understanding, in a joint effort between experts in machine learning and the law, we’ve compiled the most detailed and reliable metadata available for data licenses, sources, and provenance, as well as fine-grained characteristics like language, text domains, topics, usage, collection time, and task compositions. Beginning with nearly 40 popular instruction (or “alignment”) tuning collections, we release a suite of open source tools for downloading, filtering, and examining this training data. Our analysis sheds light on the fractured state of data transparency, particularly with data licensing, and we hope our tools will empower more informed and responsible data-centric development of future language models.
### What does **Commercial** mean here?
- `Commercial` includes datasets whose licenses permit commercial usage.
### Constituent Data Collections
- The following table lists each constituent data collection in this dataset, along with the original source from which it is derived.
| # | Collection Name | Description | Source |
| --------------- | --------------- | --------------- | --------------- |
| 1 | Anthropic HH-RLHF | Human preference data about helpfulness and harmlessness & Human-generated and annotated red teaming dialogues. | https://huggingface.co/datasets/Anthropic/hh-rlhf |
| 2 | CommitPackFT | CommitPackFT is a 2GB filtered version of CommitPack to contain only high-quality commit messages that resemble natural language instructions. | https://huggingface.co/datasets/bigcode/commitpackft |
| 3 | Dolly 15k | Databricks Dolly 15k is a dataset containing 15,000 high-quality human-generated prompt / response pairs specifically designed for instruction tuning large language models. | https://huggingface.co/datasets/databricks/databricks-dolly-15k |
| 4 | Flan Collection (Chain-of-Thought) | Chain-of-Thought sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/cot_submix_original |
| 5 | Flan Collection (Dialog) | Dialog sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/dialog_submix_original |
| 6 | Flan Collection (Flan 2021) | Flan 2021 sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/flan2021_submix_original |
| 7 | Flan Collection (P3) | P3 sub-mixture in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/t0_submix_original |
| 8 | Flan Collection (Super-NaturalInstructions) | Super-Natural Instructions in Flan collection dataset. | https://huggingface.co/datasets/conceptofmind/niv2_submix_original |
| 9 | Joke Explanation | Corpus for testing whether your LLM can explain the joke well. | https://huggingface.co/datasets/theblackcat102/joke_explaination |
| 10 | OIG | Open Instruction Generalist is a large instruction dataset of medium quality along with a smaller high quality instruction dataset (OIG-small-chip2). | https://huggingface.co/datasets/laion/OIG |
| 11 | Open Assistant | OpenAssistant Conversations (OASST1) is a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 fully annotated conversation trees. | https://huggingface.co/datasets/OpenAssistant/oasst1 |
| 12 | Open Assistant OctoPack | Filtered version of OpenAssistant Conversations (OASST1) to focus only on high-quality conversation trees as used in OctoPack paper. | https://huggingface.co/datasets/bigcode/oasst-octopack |
| 13 | Tasksource Symbol-Tuning | Tasksource datasets converted for symbol-tuning. | https://github.com/sileod/tasksource |
| 14 | Tasksource Instruct | Tasksource datasets as instructions for instruction-tuning. | https://github.com/sileod/tasksource |
| 15 | xp3x | xP3x is a collection of prompts & datasets across 277 languages & 16 NLP tasks. It contains all of xP3 + much more. | https://huggingface.co/datasets/Muennighoff/xP3x |
| 16 | StarCoder Self-Instruct | Dataset generated by prompting starcoder to generate new instructions based on some human-written seed instructions. | https://huggingface.co/datasets/codeparrot/self-instruct-starcoder |
### Data Instances
[More Information Needed]
### Data Fields
The following snippet shows the fields in a row in each data collection in this dataset:
```
[
{"from": "user", "text": input_text.strip(), "parent": dset},
{"from": "assistant", "text": target_text.strip(), "parent": 0},
...
]
```
with fields:
- from: indicates the originator of the text in this turn, either "user" or "assistant", where "assistant" denotes the model and its text is the model's response to the user's text.
- text: the text that the originator communicates to the receiver.
- parent: a field for tracing the conversation hierarchy.
Each row contains one or more JSON objects representing a user-assistant dialogue, with the text messages exchanged between them. You can leverage the `parent` field in each JSON object to follow the tree structure of interactions.
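As a hypothetical illustration of following those `parent` links (the row values below are invented for the example, and `trace_to_root` is not part of the dataset's tooling), a turn can be traced back to the root of its conversation tree:

```python
# Sketch: reconstruct a conversation thread by walking `parent` links.
# Assumes the first message's `parent` is the source dataset name (a
# string) and later messages carry the integer index of their parent.

def trace_to_root(row, leaf_index):
    """Return (from, text) pairs from the root turn down to `leaf_index`."""
    chain = []
    i = leaf_index
    while True:
        msg = row[i]
        chain.append((msg["from"], msg["text"]))
        if isinstance(msg["parent"], str):  # reached the root turn
            break
        i = msg["parent"]
    chain.reverse()
    return chain

# Invented example row in the format shown above.
row = [
    {"from": "user", "text": "Hi", "parent": "oasst1"},
    {"from": "assistant", "text": "Hello!", "parent": 0},
    {"from": "user", "text": "Tell me a joke.", "parent": 1},
    {"from": "assistant", "text": "Why did the model cross the road?", "parent": 2},
]
print(trace_to_root(row, 3))
```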
### Downloading Dataset
You can load the entire dataset by using the following code:
```python
import os
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("DataProvenanceInitiative/Commercially-Verified-Licenses")
```
You can load a specific dataset subset, such as CommitPackFT, using the following code:
```python
import os
from datasets import load_dataset
subset = load_dataset(
"DataProvenanceInitiative/Commercially-Verified-Licenses",
split="train",
num_proc = os.cpu_count(),
revision="main",
data_files="data/commercial/commitpack_ft_commercial"
)
```
### Data Splits
[More Information Needed]
[TODO: Add each dataset and add # of samples in train/dev]
## Dataset Creation
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{dpi,
title={The Data Provenance Project},
author={Longpre, Shayne and Mahari, Robert and Muennighoff, Niklas and Chen, Anthony and Perisetla, Kartik and Brannon, William and Kabbara, Jad and Villa, Luis and Hooker, Sara},
journal={arXiv preprint [TODO]add id},
year={2023}
}
```
### Contributions
Thanks to [data.provenance.init@gmail.com](mailto:data.provenance.init@gmail.com) for adding this dataset. | 8,632 | [
[
-0.029449462890625,
-0.053192138671875,
0.01715087890625,
0.008636474609375,
-0.004138946533203125,
-0.0010156631469726562,
-0.01324462890625,
-0.043670654296875,
0.007328033447265625,
0.05316162109375,
-0.05218505859375,
-0.053192138671875,
-0.0335693359375,
... |
Goorm-AI-04/Drone_Doppler | 2023-09-28T06:21:27.000Z | [
"region:us"
] | Goorm-AI-04 | null | null | 0 | 45 | 2023-09-28T06:10:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
sequence:
sequence: float64
- name: label
dtype: int64
- name: type
dtype: string
splits:
- name: train
num_bytes: 75993012
num_examples: 13988
- name: test
num_bytes: 18998253
num_examples: 3497
download_size: 96723379
dataset_size: 94991265
---
# Dataset Card for "Drone_Doppler"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 632 | [
[
-0.04931640625,
0.0015020370483398438,
0.004589080810546875,
0.0007710456848144531,
-0.0168914794921875,
0.0146026611328125,
0.01739501953125,
-0.017547607421875,
0.047332763671875,
0.01666259765625,
-0.050384521484375,
-0.03790283203125,
-0.031585693359375,
... |
ttagu99/ko_f_1871 | 2023-10-04T15:15:26.000Z | [
"region:us"
] | ttagu99 | null | null | 0 | 45 | 2023-10-04T15:15:21 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 6462525
num_examples: 1871
download_size: 3201114
dataset_size: 6462525
---
# Dataset Card for "ko_f_1871"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 427 | [
[
-0.038055419921875,
-0.0179443359375,
0.0280303955078125,
0.0263519287109375,
-0.018585205078125,
-0.010101318359375,
0.03668212890625,
-0.007755279541015625,
0.0533447265625,
0.037322998046875,
-0.03973388671875,
-0.050140380859375,
-0.039031982421875,
-0.0... |
kewu93/sketch_100 | 2023-10-06T00:24:00.000Z | [
"region:us"
] | kewu93 | null | null | 0 | 45 | 2023-10-06T00:23:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1431746.0
num_examples: 100
- name: val
num_bytes: 1431746.0
num_examples: 100
download_size: 2858841
dataset_size: 2863492.0
---
# Dataset Card for "sketch_100"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 571 | [
[
-0.03887939453125,
-0.02001953125,
0.0179290771484375,
0.0277557373046875,
-0.0100250244140625,
0.00029850006103515625,
0.022918701171875,
-0.0036678314208984375,
0.07537841796875,
0.037353515625,
-0.066162109375,
-0.044464111328125,
-0.0289306640625,
-0.022... |
hk-kaden-kim/uzh-hs23-etsp-eval-single-base-bar | 2023-10-08T10:52:59.000Z | [
"region:us"
] | hk-kaden-kim | null | null | 0 | 45 | 2023-10-08T10:45:41 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: test
num_bytes: 5223052.0
num_examples: 100
download_size: 5179034
dataset_size: 5223052.0
---
# Dataset Card for "uzh-hs23-etsp-eval-single-base-bar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 414 | [
[
-0.045562744140625,
-0.0305023193359375,
0.0107574462890625,
0.0179290771484375,
-0.031646728515625,
0.0216827392578125,
0.007152557373046875,
0.0134429931640625,
0.053436279296875,
0.050506591796875,
-0.0460205078125,
-0.06085205078125,
-0.0152130126953125,
... |
Rijgersberg/mc4_en_cleaned_tiny | 2023-10-11T04:52:17.000Z | [
"region:us"
] | Rijgersberg | null | null | 0 | 45 | 2023-10-11T04:52:01 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: validation
num_bytes: 36826237
num_examples: 16189
download_size: 22788389
dataset_size: 36826237
---
# Dataset Card for "mc4_en_cleaned_tiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 536 | [
[
-0.048797607421875,
-0.01000213623046875,
0.026519775390625,
-0.0089111328125,
-0.0228729248046875,
-0.005161285400390625,
0.0108795166015625,
-0.007114410400390625,
0.06829833984375,
0.045623779296875,
-0.057220458984375,
-0.041046142578125,
-0.02667236328125,
... |
sayan1101/sum_only_sft | 2023-10-16T10:54:12.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 45 | 2023-10-15T08:24:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: sayan1101
data_files:
- split: train
path: sayan1101/train-*
- split: test
path: sayan1101/test-*
dataset_info:
- config_name: default
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1257397090
num_examples: 287113
download_size: 739106590
dataset_size: 1257397090
- config_name: sayan1101
features:
- name: prompt
dtype: string
- name: Instruction
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 440245021.64652944
num_examples: 50000
- name: test
num_bytes: 44024502.16465294
num_examples: 5000
download_size: 294043268
dataset_size: 484269523.8111824
---
# Dataset Card for "sum_only_sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 967 | [
[
-0.030517578125,
-0.024017333984375,
0.01092529296875,
0.03704833984375,
-0.0186309814453125,
0.00464630126953125,
0.0240478515625,
0.005611419677734375,
0.07562255859375,
0.053314208984375,
-0.0682373046875,
-0.037506103515625,
-0.03179931640625,
-0.0142211... |
vitaliy-sharandin/depression-instruct | 2023-10-25T13:24:11.000Z | [
"region:us"
] | vitaliy-sharandin | null | null | 0 | 45 | 2023-10-25T13:22:41 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 12872
num_examples: 51
download_size: 10500
dataset_size: 12872
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "depression-instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 483 | [
[
-0.046295166015625,
-0.023406982421875,
0.031219482421875,
0.0330810546875,
-0.008880615234375,
-0.009429931640625,
0.00852203369140625,
0.0007066726684570312,
0.061065673828125,
0.021697998046875,
-0.0665283203125,
-0.05999755859375,
-0.05059814453125,
-0.0... |
raphael0202/ingredient-detection-layout-dataset | 2023-11-01T16:22:36.000Z | [
"region:us"
] | raphael0202 | null | null | 0 | 45 | 2023-10-29T12:49:48 | ---
dataset_info:
features:
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ING
'2': I-ING
- name: words
sequence: string
- name: bboxes
sequence:
sequence: int64
- name: image
dtype: image
- name: text
dtype: string
- name: offsets
sequence:
sequence: int64
- name: meta
struct:
- name: barcode
dtype: string
- name: image_id
dtype: string
- name: url
dtype: string
- name: id
dtype: string
- name: in_test_split
dtype: bool
splits:
- name: train
num_bytes: 2059533770.875
num_examples: 5065
- name: test
num_bytes: 244591039.0
num_examples: 556
download_size: 2271205424
dataset_size: 2304124809.875
---
# Dataset Card for "ingredient-detection-layout-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 984 | [
[
-0.03875732421875,
-0.019012451171875,
0.03125,
0.01499176025390625,
0.0089874267578125,
-0.002368927001953125,
0.02166748046875,
-0.0005369186401367188,
0.07318115234375,
0.040496826171875,
-0.066650390625,
-0.06951904296875,
-0.03631591796875,
-0.017395019... |
sunghuncsa/wow | 2023-11-01T07:49:46.000Z | [
"region:us"
] | sunghuncsa | null | null | 0 | 45 | 2023-11-01T07:49:28 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.005039215087890625,
0.051361083984375,
0.0169830322265625,
-0.052093505859375,
-0.01497650146484375,
-0.06036376953125,
0.0379333... |
Champion/vpc2020_clear_anon_speech | 2021-10-12T14:19:45.000Z | [
"region:us"
] | Champion | null | null | 0 | 44 | 2022-03-02T23:29:22 | Repo to share original and anonymized speech of vpc2020
| 56 | [
[
-0.02093505859375,
0.00045490264892578125,
0.0016908645629882812,
0.036590576171875,
-0.044586181640625,
0.0261688232421875,
-0.0252227783203125,
-0.034820556640625,
0.0272979736328125,
0.1075439453125,
-0.03167724609375,
-0.036468505859375,
-0.0406494140625,
... |
albertvillanova/dummy_libri2mix | 2021-07-28T09:44:02.000Z | [
"region:us"
] | albertvillanova | null | null | 0 | 44 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01496124267578125,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.005039215087890625,
0.051361083984375,
0.0169830322265625,
-0.052093505859375,
-0.01497650146484375,
-0.06036376953125,
0.0379333... |
gigant/horse2zebra | 2022-10-24T17:37:53.000Z | [
"task_categories:image-to-image",
"license:cc",
"GAN",
"unpaired-image-to-image-translation",
"arxiv:1703.10593",
"region:us"
] | gigant | Two unpaired sets of photos of respectively horses and zebras, designed for unpaired image-to-image translation, as seen in the paper introducing CycleGAN | @inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
} | 1 | 44 | 2022-03-11T09:59:03 | ---
license: cc
task_categories:
- image-to-image
task_ids: []
pretty_name: Horse2Zebra
tags:
- GAN
- unpaired-image-to-image-translation
---
## Dataset Description
- **Homepage:** https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/
- **Paper:** https://arxiv.org/abs/1703.10593
### Dataset Summary
This dataset was obtained from the original CycleGAN Datasets directory available on [Berkeley's website](https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/).
For more details about the dataset you can refer to the [original CycleGAN publication](https://arxiv.org/abs/1703.10593).
### How to use
You can easily load the dataset with the following lines:
```python
from datasets import load_dataset
data_horses = load_dataset("gigant/horse2zebra", name="horse", split="train")
data_zebras = load_dataset("gigant/horse2zebra", name="zebra", split="train")
```
Two splits are available, `"train"` and `"test"`
### Citation Information
```
@inproceedings{CycleGAN2017,
title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
year={2017}
}
``` | 1,274 | [
[
-0.02166748046875,
-0.01508331298828125,
0.00420379638671875,
0.0117034912109375,
-0.034271240234375,
-0.0140533447265625,
-0.0169677734375,
-0.04095458984375,
0.00733184814453125,
0.044189453125,
-0.04144287109375,
-0.043060302734375,
-0.037139892578125,
0.... |
marksverdhei/clickbait_title_classification | 2022-03-29T21:25:01.000Z | [
"license:mit",
"arxiv:1610.09786",
"region:us"
] | marksverdhei | null | null | 3 | 44 | 2022-03-29T21:02:09 | ---
license: mit
---
Dataset introduced in [Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media](https://arxiv.org/abs/1610.09786)
by Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, Niloy Ganguly
Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media”. In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, US, August 2016.
Cite:
```
@inproceedings{chakraborty2016stop,
title={Stop Clickbait: Detecting and preventing clickbaits in online news media},
author={Chakraborty, Abhijnan and Paranjape, Bhargavi and Kakarla, Sourya and Ganguly, Niloy},
booktitle={Advances in Social Networks Analysis and Mining (ASONAM), 2016 IEEE/ACM International Conference on},
pages={9--16},
year={2016},
organization={IEEE}
}
```
| 963 | [
[
-0.01314544677734375,
-0.054168701171875,
-0.002330780029296875,
0.0203857421875,
-0.024871826171875,
0.0006160736083984375,
-0.01493072509765625,
-0.0206146240234375,
0.0272674560546875,
0.038299560546875,
-0.0249481201171875,
-0.0447998046875,
-0.0465087890625... |
taln-ls2n/inspec | 2022-07-21T14:14:59.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | taln-ls2n | Inspec benchmark dataset for keyphrase extraction and generation. | @inproceedings{hulth2003improved,
title={Improved automatic keyword extraction given more linguistic knowledge},
author={Hulth, Anette},
booktitle={Proceedings of the 2003 conference on Empirical methods in natural language processing},
pages={216--223},
year={2003}
} | 3 | 44 | 2022-04-12T08:10:45 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 1K<n<10K
pretty_name: Inspec
---
# Inspec Benchmark Dataset for Keyphrase Generation
## About
Inspec is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 2,000 abstracts of scientific papers collected from the [Inspec database](https://www.theiet.org/resources/inspec/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the Inspec dataset can be found in the original paper [(Hulth, 2003)][hulth-2003].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
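The canonical matching logic lives in `prmu.py`; as a simplified sketch of how a reference keyphrase is checked for being "Present" in the source text (lowercasing only, omitting the Porter stemming step the real pipeline applies, so inflected forms would be missed here):

```python
# Simplified sketch of present-keyphrase matching. The dataset's own
# prmu.py stems tokens with NLTK's Porter stemmer first; this version
# only lowercases, so it understates matches of inflected forms.

def contains_sequence(tokens, phrase_tokens):
    """True if `phrase_tokens` occurs contiguously in `tokens`."""
    n = len(phrase_tokens)
    return any(tokens[i:i + n] == phrase_tokens
               for i in range(len(tokens) - n + 1))

def is_present(text, keyphrase):
    """Check whether `keyphrase` appears verbatim (case-insensitive)."""
    tokens = text.lower().split()
    return contains_sequence(tokens, keyphrase.lower().split())

doc = "Improved automatic keyword extraction given more linguistic knowledge"
print(is_present(doc, "keyword extraction"))   # contiguous, counts as Present
print(is_present(doc, "extraction keyword"))   # reordered, not Present here
```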
## Content and statistics
The dataset is divided into the following three splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 1,000 | 141.7 | 9.79 | 78.00 | 9.85 | 6.22 | 5.93 |
| Validation | 500 | 132.2 | 9.15 | 77.96 | 9.82 | 6.75 | 5.47 |
| Test | 500 | 134.8 | 9.83 | 78.70 | 9.92 | 6.48 | 4.91 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Hulth, 2003) Anette Hulth. 2003.
[Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028).
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[hulth-2003]: https://aclanthology.org/W03-1028/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | 3,089 | [
[
-0.0223541259765625,
-0.02996826171875,
0.0276947021484375,
0.015777587890625,
-0.0201416015625,
0.0204925537109375,
-0.01007080078125,
-0.013580322265625,
0.004589080810546875,
0.0234527587890625,
-0.033905029296875,
-0.053558349609375,
-0.0309600830078125,
... |
VanessaSchenkel/translation-en-pt | 2022-08-06T21:52:26.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:pt",
"license:afl-3.0",
"region:us"
] | VanessaSchenkel | null | null | 2 | 44 | 2022-07-06T00:29:28 | ---
annotations_creators:
- found
language:
- en
- pt
language_creators:
- found
license:
- afl-3.0
multilinguality:
- translation
pretty_name: VanessaSchenkel/translation-en-pt
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- translation
task_ids: []
---
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/translation-en-pt", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 260482
})
})
```
Example:
```
remote_dataset["train"][5]
```
Output:
```
{'id': '5',
'translation': {'english': 'I have to go to sleep.',
'portuguese': 'Tenho de dormir.'}}
``` | 743 | [
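Given the record layout above, the nested `translation` field can be flattened into two aligned source/target lists, e.g. for feeding a tokenizer (a minimal sketch; the field names are taken from the example record):

```python
def to_parallel_lists(records):
    """Flatten nested translation records into two aligned sentence lists."""
    english = [r["translation"]["english"] for r in records]
    portuguese = [r["translation"]["portuguese"] for r in records]
    return english, portuguese
```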
[
-0.0208587646484375,
-0.01195526123046875,
-0.0218353271484375,
0.01290130615234375,
-0.045989990234375,
-0.009735107421875,
-0.036773681640625,
0.001827239990234375,
0.038909912109375,
0.030517578125,
-0.03350830078125,
-0.05938720703125,
-0.0265045166015625,
... |
embedding-data/WikiAnswers | 2022-08-02T03:33:01.000Z | [
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-classification",
"language:en",
"license:mit",
"region:us"
] | embedding-data | null | null | 1 | 44 | 2022-07-09T00:13:25 | ---
license: mit
language:
- en
paperswithcode_id: embedding-data/WikiAnswers
pretty_name: WikiAnswers
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "WikiAnswers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/afader/oqa#wikianswers-corpus](https://github.com/afader/oqa#wikianswers-corpus)
- **Repository:** [More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
- **Paper:** [More Information Needed](https://doi.org/10.1145/2623330.2623677)
- **Point of Contact:** [Anthony Fader](https://dl.acm.org/profile/81324489111), [Luke Zettlemoyer](https://dl.acm.org/profile/81100527621), [Oren Etzioni](https://dl.acm.org/profile/99658633129)
### Dataset Summary
The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases.
Each cluster optionally contains an answer provided by WikiAnswers users. There are 30,370,994 clusters containing an average of 25 questions per cluster. 3,386,256 (11%) of the clusters have an answer.
### Supported Tasks
- [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure
Each example in the dataset contains 25 equivalent sentences and is formatted as a dictionary with a single key, "set", whose value is the list of sentences.
```
{"set": [sentence_1, sentence_2, ..., sentence_25]}
{"set": [sentence_1, sentence_2, ..., sentence_25]}
...
{"set": [sentence_1, sentence_2, ..., sentence_25]}
```
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences.
### Usage Example
Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
```python
from datasets import load_dataset
dataset = load_dataset("embedding-data/WikiAnswers")
```
The dataset is loaded as a `DatasetDict` and has the format for `N` examples:
```python
DatasetDict({
train: Dataset({
features: ['set'],
num_rows: N
})
})
```
Review an example `i` with:
```python
dataset["train"][i]["set"]
```
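For Sentence Transformers training, each cluster can be expanded into positive paraphrase pairs (a sketch; in practice you would wrap each pair in an `InputExample` and likely cap the expansion, since 25 sentences yield 300 pairs per cluster):

```python
from itertools import combinations

def cluster_to_pairs(cluster, max_pairs=None):
    """Expand one paraphrase cluster ({"set": [...]}) into positive pairs."""
    pairs = list(combinations(cluster["set"], 2))
    return pairs if max_pairs is None else pairs[:max_pairs]
```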
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the source language producers?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
#### Who are the annotators?
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Personal and Sensitive Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Discussion of Biases
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Other Known Limitations
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Licensing Information
[More Information Needed](https://github.com/afader/oqa#wikianswers-corpus)
### Citation Information
```
@inproceedings{Fader14,
author = {Anthony Fader and Luke Zettlemoyer and Oren Etzioni},
title = {{Open Question Answering Over Curated and Extracted
Knowledge Bases}},
booktitle = {KDD},
year = {2014}
}
```
### Contributions
| 4,946 | [
[
-0.037200927734375,
-0.05010986328125,
0.01092529296875,
-0.0084228515625,
0.00592041015625,
-0.01282501220703125,
-0.0244140625,
-0.011322021484375,
0.0400390625,
0.041229248046875,
-0.046722412109375,
-0.05548095703125,
-0.060272216796875,
0.02352905273437... |
succinctly/midjourney-prompts | 2022-07-22T01:49:16.000Z | [
"license:apache-2.0",
"region:us"
] | succinctly | null | null | 77 | 44 | 2022-07-21T20:29:49 | ---
license: apache-2.0
---
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney): users issue a query in natural language, and the Midjourney bot returns AI-generated images that follow the given description. The raw dataset (with Discord messages) can be found on Kaggle: [Midjourney User Prompts & Generated Images (250k)](https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage). The authors of the scraped dataset have no affiliation to Midjourney.
This HuggingFace dataset was [processed](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) from the raw Discord messages to solely include the text prompts issued by the user (thus excluding the generated images and any other metadata). It could be used, for instance, to fine-tune a large language model to produce or auto-complete creative prompts for image generation.
Check out [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator), a GPT-2 model fine-tuned on this dataset. | 1,346 | [
[
-0.039825439453125,
-0.06939697265625,
0.0545654296875,
0.026275634765625,
-0.0189361572265625,
-0.003559112548828125,
-0.01355743408203125,
-0.037139892578125,
0.022369384765625,
0.0352783203125,
-0.08831787109375,
-0.025726318359375,
-0.04278564453125,
0.0... |
ganchengguang/resume-5label-classification | 2022-09-04T02:53:22.000Z | [
"region:us"
] | ganchengguang | null | null | 1 | 44 | 2022-09-04T02:37:54 | This is a resume sentence classification dataset constructed based on resume text.(https://www.kaggle.com/datasets/oo7kartik/resume-text-batch)
The dataset has five categories (experience, education, knowledge, project, others) and three element labels (header, content, meta).
This dataset accompanies a published paper; if you use it in a paper or other work, please cite the BibTeX entry below.
@article{甘程光2021英文履歴書データ抽出システムへの,
title={英文履歴書データ抽出システムへの BERT 適用性の検討},
author={甘程光 and 高橋良英 and others},
journal={2021 年度 情報処理学会関西支部 支部大会 講演論文集},
volume={2021},
year={2021}
} | 579 | [
[
0.0037841796875,
-0.042449951171875,
0.024688720703125,
0.0171966552734375,
-0.0053558349609375,
-0.009429931640625,
-0.0006132125854492188,
-0.00988006591796875,
0.01425933837890625,
0.0616455078125,
-0.0216827392578125,
-0.062225341796875,
-0.021270751953125,
... |
Bingsu/openwebtext_20p | 2022-09-16T02:36:38.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|openwebtext",
"la... | Bingsu | null | null | 4 | 44 | 2022-09-16T02:15:16 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id: openwebtext
pretty_name: openwebtext_20p
size_categories:
- 1M<n<10M
source_datasets:
- extended|openwebtext
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# openwebtext_20p
## Dataset Description
- **Origin:** [openwebtext](https://huggingface.co/datasets/openwebtext)
- **Download Size** 4.60 GiB
- **Generated Size** 7.48 GiB
- **Total Size** 12.08 GiB
first 20% of [openwebtext](https://huggingface.co/datasets/openwebtext) | 644 | [
[
-0.05560302734375,
-0.0355224609375,
-0.0009322166442871094,
0.0540771484375,
-0.0261993408203125,
-0.004474639892578125,
0.00403594970703125,
-0.03375244140625,
0.05291748046875,
0.0226287841796875,
-0.056854248046875,
-0.03228759765625,
-0.047821044921875,
... |
bigbio/bionlp_st_2011_rel | 2022-12-22T15:43:54.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.
The task concerns the extraction of two types of part-of relations between a
gene/protein and an associated entity. | @inproceedings{10.5555/2107691.2107703,
author = {Pyysalo, Sampo and Ohta, Tomoko and Tsujii, Jun'ichi},
title = {Overview of the Entity Relations (REL) Supporting Task of BioNLP Shared Task 2011},
year = {2011},
isbn = {9781937284091},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {This paper presents the Entity Relations (REL) task,
a supporting task of the BioNLP Shared Task 2011. The task concerns
the extraction of two types of part-of relations between a gene/protein
and an associated entity. Four teams submitted final results for
the REL task, with the highest-performing system achieving 57.7%
F-score. While experiments suggest use of the data can help improve
event extraction performance, the task data has so far received only
limited use in support of event extraction. The REL task continues
as an open challenge, with all resources available from the shared
task website.},
booktitle = {Proceedings of the BioNLP Shared Task 2011 Workshop},
pages = {83–88},
numpages = {6},
location = {Portland, Oregon},
series = {BioNLP Shared Task '11}
} | 1 | 44 | 2022-11-13T22:06:59 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2011 REL
homepage: https://github.com/openbiocorpora/bionlp-st-2011-rel
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2011 REL
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-rel
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE,COREF
The Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.
The task concerns the extraction of two types of part-of relations between a
gene/protein and an associated entity.
## Citation Information
```
@inproceedings{10.5555/2107691.2107703,
author = {Pyysalo, Sampo and Ohta, Tomoko and Tsujii, Jun'ichi},
title = {Overview of the Entity Relations (REL) Supporting Task of BioNLP Shared Task 2011},
year = {2011},
isbn = {9781937284091},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {This paper presents the Entity Relations (REL) task,
a supporting task of the BioNLP Shared Task 2011. The task concerns
the extraction of two types of part-of relations between a gene/protein
and an associated entity. Four teams submitted final results for
the REL task, with the highest-performing system achieving 57.7%
F-score. While experiments suggest use of the data can help improve
event extraction performance, the task data has so far received only
limited use in support of event extraction. The REL task continues
as an open challenge, with all resources available from the shared
task website.},
booktitle = {Proceedings of the BioNLP Shared Task 2011 Workshop},
pages = {83–88},
numpages = {6},
location = {Portland, Oregon},
series = {BioNLP Shared Task '11}
}
```
| 1,901 | [
[
-0.01407623291015625,
-0.032806396484375,
0.0178985595703125,
0.0057220458984375,
-0.03363037109375,
-0.01042938232421875,
-0.006374359130859375,
-0.07415771484375,
0.04144287109375,
0.0308685302734375,
-0.04998779296875,
-0.04541015625,
-0.02642822265625,
0... |
bigbio/bionlp_st_2013_gro | 2022-12-22T15:44:01.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | GRO Task: Populating the Gene Regulation Ontology with events and
relations. A data set from the bio NLP shared tasks competition from 2013 | @inproceedings{kim-etal-2013-gro,
title = "{GRO} Task: Populating the Gene Regulation Ontology with events and relations",
author = "Kim, Jung-jae and
Han, Xu and
Lee, Vivian and
Rebholz-Schuhmann, Dietrich",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2007",
pages = "50--57",
} | 1 | 44 | 2022-11-13T22:07:10 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 GRO
homepage: https://github.com/openbiocorpora/bionlp-st-2013-gro
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for BioNLP 2013 GRO
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-gro
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE
GRO Task: Populating the Gene Regulation Ontology with events and
relations. A dataset from the BioNLP Shared Task 2013 competition.
## Citation Information
```
@inproceedings{kim-etal-2013-gro,
title = "{GRO} Task: Populating the Gene Regulation Ontology with events and relations",
author = "Kim, Jung-jae and
Han, Xu and
Lee, Vivian and
Rebholz-Schuhmann, Dietrich",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2007",
pages = "50--57",
}
```
| 1,245 | [
[
-0.0017957687377929688,
-0.019683837890625,
0.024444580078125,
0.00824737548828125,
-0.01557159423828125,
-0.0206298828125,
-0.0066070556640625,
-0.04925537109375,
0.03106689453125,
0.03472900390625,
-0.046722412109375,
-0.05303955078125,
-0.0390625,
0.02548... |
souljoy/COVID-19_weibo_emotion | 2022-12-29T09:42:16.000Z | [
"region:us"
] | souljoy | null | null | 2 | 44 | 2022-12-29T09:05:37 | COVID-19 Epidemic Weibo Emotional Dataset, the content of Weibo in this dataset is the epidemic Weibo obtained by using relevant keywords to filter during the epidemic, and its content is related to COVID-19.
Each tweet is labeled as one of the following six categories: neutral (no emotion), happy (positive), angry (angry), sad (sad), fear (fear), surprise (surprise)
The COVID-19 Weibo training dataset includes 8,606 Weibos, the validation set contains 2,000 Weibos, and the test dataset contains 3,000 Weibos.
疫情微博数据集,该数据集内的微博内容是在疫情期间使用相关关键字筛选获得的疫情微博,其内容与新冠疫情相关。
每条微博被标注为以下六个类别之一:neutral(无情绪)、happy(积极)、angry(愤怒)、sad(悲伤)、fear(恐惧)、surprise(惊奇)
疫情微博训练数据集包括8,606条微博,验证集包含2,000条微博,测试数据集包含3,000条微博。 | 704 | [
[
-0.0210418701171875,
-0.051361083984375,
-0.01450347900390625,
0.05224609375,
-0.032440185546875,
0.0011463165283203125,
0.006320953369140625,
-0.0325927734375,
0.03790283203125,
0.00876617431640625,
-0.035308837890625,
-0.04888916015625,
-0.045196533203125,
... |
SotiriosKastanas/difffunsd | 2023-03-08T10:49:01.000Z | [
"region:us"
] | SotiriosKastanas | null | null | 0 | 44 | 2023-03-08T10:12:19 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
IndianaUniversityDatasetsModels/Medical_reports_Splits | 2023-03-10T11:12:02.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | IndianaUniversityDatasetsModels | null | null | 3 | 44 | 2023-03-08T14:21:15 | ---
dataset_info:
features:
- name: MeSH
dtype: string
- name: Problems
dtype: string
- name: findings
dtype: string
- name: impression
dtype: string
splits:
- name: train
num_bytes: 1046536.8153707596
num_examples: 2831
- name: test
num_bytes: 92417.59231462024
num_examples: 250
- name: validation
num_bytes: 92417.59231462024
num_examples: 250
download_size: 395063
dataset_size: 1231372
task_categories:
- text-generation
- text2text-generation
language:
- en
pretty_name: Indiana University X-Rays and Reports dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for "Medical_reports_Splits"
Original Source [openi.nlm.nih.gov](https://openi.nlm.nih.gov/)
Kaggle Source [Chest X-rays (Indiana University)](https://www.kaggle.com/datasets/raddar/chest-xrays-indiana-university)
[For more information](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 976 | [
[
-0.0180816650390625,
-0.01137542724609375,
0.0316162109375,
-0.0101470947265625,
-0.031036376953125,
0.013031005859375,
0.0217132568359375,
-0.031524658203125,
0.05731201171875,
0.038177490234375,
-0.0546875,
-0.05303955078125,
-0.04833984375,
-0.00663375854... |
alpayariyak/LLaVA_calculus_handwriting | 2023-05-24T20:29:57.000Z | [
"region:us"
] | alpayariyak | null | null | 3 | 44 | 2023-05-24T18:47:22 | ---
dataset_info:
features:
- name: image
dtype: image
- name: id
dtype: string
- name: conversations
dtype: string
splits:
- name: train
num_bytes: 9607911271.0
num_examples: 100000
download_size: 9289147010
dataset_size: 9607911271.0
---
# Dataset Card for "LLaVA_calculus_handwriting"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 456 | [
[
0.00019943714141845703,
-0.0270233154296875,
0.0338134765625,
0.0175933837890625,
-0.0191192626953125,
0.0139312744140625,
0.018463134765625,
-0.01222991943359375,
0.06365966796875,
0.04107666015625,
-0.061248779296875,
-0.069091796875,
-0.052520751953125,
-... |
jmichaelov/inverse_scaling_prize-hindsight_neglect | 2023-08-23T05:37:56.000Z | [
"task_categories:multiple-choice",
"language:en",
"license:cc-by-4.0",
"region:us"
] | jmichaelov | null | null | 0 | 44 | 2023-06-22T04:22:43 | ---
license: cc-by-4.0
task_categories:
- multiple-choice
language:
- en
pretty_name: Hindsight Neglect
---
The hindsight-neglect task from the [Inverse Scaling Prize](https://github.com/inverse-scaling/prize) | 210 | [
[
-0.01332855224609375,
-0.02001953125,
0.039703369140625,
0.035400390625,
0.01007080078125,
-0.0261077880859375,
0.01666259765625,
-0.0198974609375,
0.03497314453125,
0.040496826171875,
-0.08660888671875,
-0.0107421875,
-0.03997802734375,
-0.0265655517578125,... |
gabeorlanski/bc-humaneval | 2023-07-18T16:23:28.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|openai_humaneval",
"language:en",
"license:apache-2.0",
"code",
"arxiv:2302.01973",
"arxiv:2107.03374",
"region:us"
] | gabeorlanski | The HumanEval dataset in BabelCode format. | @article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{chen2021codex,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
year={2021},
eprint={2107.03374},
archivePrefix={arXiv},
primaryClass={cs.LG}
} | 2 | 44 | 2023-06-25T16:36:56 | ---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- code
pretty_name: BabelCode HumanEval
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|openai_humaneval
---
# Dataset Card for BabelCode HumanEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).
### Dataset Summary
The BabelCode-HumanEval (BC-HumanEval) dataset converts the [HumanEval dataset released by OpenAI](https://github.com/openai/human-eval) to 16 programming languages.
### Supported Tasks and Leaderboards
### Languages
BC-HumanEval supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript
## Dataset Structure
```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-humaneval")
DatasetDict({
test: Dataset({
features: ['qid', 'title', 'language', 'text', 'signature_with_docstring', 'signature', 'arguments', 'solution', 'question_info'],
num_rows: 2576
})
})
```
### Data Fields
- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `text`: The description of the problem.
- `signature`: The signature for the problem.
- `signature_with_docstring`: The signature with the adequately formatted docstring for the given problem.
- `arguments`: The arguments of the problem.
- `solution`: The solution in Python.
- `question_info`: The dict of information used for executing predictions. It has the keys:
- `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction.
- `test_list`: The raw json line of the list of tests for the problem. To load them, use `json.loads`
- `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
- `entry_fn_name`: The function's name to use an entry point.
- `entry_cls_name`: The class name to use an entry point.
- `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
- `timeouts`: The default timeouts for each command.
- `extension`: The extension for the prediction file.
**NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/bc-humaneval")['test']
>>> # The original entry_fn_name
>>> ds[0]['question_info']['entry_fn_name']
hasCloseElements
>>> # You MUST update the corresponding entry_fn_name
>>> ds[0]['question_info']['entry_fn_name'] = 'f'
>>> ds[0]['question_info']['entry_fn_name']
f
```
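Putting the `question_info` fields together, a prediction can be substituted into the raw test script roughly as follows (a sketch based on the field descriptions above; the official babelcode harness is the reference implementation):

```python
def render_test_script(question_info, prediction):
    """Fill the PLACEHOLDER_* holes of a BC-HumanEval test script."""
    script = question_info["test_code"]
    script = script.replace("PLACEHOLDER_FN_NAME", question_info["entry_fn_name"])
    if question_info.get("entry_cls_name"):
        # Only some languages require a class name as the entry point
        script = script.replace("PLACEHOLDER_CLS_NAME", question_info["entry_cls_name"])
    return script.replace("PLACEHOLDER_CODE_BODY", prediction)
```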
## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated.
For information on how the original HumanEval was curated, please see the [Evaluating Large Language Models Trained on Code paper](https://arxiv.org/abs/2107.03374).
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{orlanski2023measuring,
title={Measuring The Impact Of Programming Language Distribution},
author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele},
journal={arXiv preprint arXiv:2302.01973},
year={2023}
}
@article{chen2021codex,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
year={2021},
eprint={2107.03374},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | 5,568 | [
[
-0.03253173828125,
-0.04119873046875,
0.00955963134765625,
0.01392364501953125,
0.0037975311279296875,
-0.016143798828125,
-0.028167724609375,
-0.0302581787109375,
0.01088714599609375,
0.027374267578125,
-0.0250244140625,
-0.047882080078125,
-0.03546142578125,
... |
ahmed-masry/chartqa_without_images | 2023-07-26T00:46:27.000Z | [
"region:us"
] | ahmed-masry | null | null | 0 | 44 | 2023-07-26T00:46:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: imgname
dtype: string
- name: query
dtype: string
- name: label
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 3169833
num_examples: 28299
- name: val
num_bytes: 213437
num_examples: 1920
- name: test
num_bytes: 272309
num_examples: 2500
download_size: 1596635
dataset_size: 3655579
---
# Dataset Card for "chartqa_without_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 748 | [
[
-0.042449951171875,
-0.0057525634765625,
0.01995849609375,
0.00966644287109375,
-0.040740966796875,
-0.000949859619140625,
0.02972412109375,
-0.011322021484375,
0.06304931640625,
0.049468994140625,
-0.056182861328125,
-0.070556640625,
-0.056793212890625,
-0.... |
abacusai/LongChat-Lines | 2023-07-28T03:14:01.000Z | [
"region:us"
] | abacusai | null | null | 9 | 44 | 2023-07-27T15:16:12 | ---
configs:
- config_name: default
data_files:
- split: '100'
path: data/100-*
- split: '150'
path: data/150-*
- split: '175'
path: data/175-*
- split: '200'
path: data/200-*
- split: '250'
path: data/250-*
- split: '300'
path: data/300-*
- split: '400'
path: data/400-*
- split: '500'
path: data/500-*
- split: '600'
path: data/600-*
- split: '680'
path: data/680-*
- split: '750'
path: data/750-*
- split: '850'
path: data/850-*
- split: '950'
path: data/950-*
- split: '1100'
path: data/1100-*
dataset_info:
features:
- name: expected_number
dtype: int64
- name: num_lines
dtype: int64
- name: token_size
dtype: int64
- name: prompt
dtype: string
splits:
- name: '100'
num_bytes: 275673
num_examples: 50
- name: '150'
num_bytes: 400446
num_examples: 50
- name: '175'
num_bytes: 463159
num_examples: 50
- name: '200'
num_bytes: 525856
num_examples: 50
- name: '250'
num_bytes: 650643
num_examples: 50
- name: '300'
num_bytes: 775800
num_examples: 50
- name: '400'
num_bytes: 1025288
num_examples: 50
- name: '500'
num_bytes: 1276039
num_examples: 50
- name: '600'
num_bytes: 1524627
num_examples: 50
- name: '680'
num_bytes: 1724325
num_examples: 50
- name: '750'
num_bytes: 1899422
num_examples: 50
- name: '850'
num_bytes: 2149220
num_examples: 50
- name: '950'
num_bytes: 2398398
num_examples: 50
- name: '1100'
num_bytes: 2772556
num_examples: 50
download_size: 7270406
dataset_size: 17861452
---
# Dataset Card for "LongChat-Lines"
This dataset was used to evaluate the performance of models finetuned to operate on longer contexts. It is based on
a task template proposed by LMSys to evaluate attention to arbitrary points in the context. See the full details at
[https://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context).
[
-0.03546142578125,
-0.05938720703125,
0.020599365234375,
0.006618499755859375,
-0.033203125,
-0.032623291015625,
-0.00936126708984375,
-0.02789306640625,
0.020965576171875,
0.056427001953125,
-0.072021484375,
-0.028167724609375,
-0.0035152435302734375,
-0.00... |
natmin322/28k_vietnamese_voice_augmented_of_VigBigData | 2023-08-12T17:18:29.000Z | [
"region:us"
] | natmin322 | null | null | 1 | 44 | 2023-08-12T13:13:41 | ---
configs:
- config_name: default
data_files:
- split: train_1
path: data/train_1-*
- split: train_2
path: data/train_2-*
- split: train_3
path: data/train_3-*
- split: train_4
path: data/train_4-*
- split: train_5
path: data/train_5-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train_1
num_bytes: 1433691842.0
num_examples: 5000
- name: train_2
num_bytes: 1026073200.0
num_examples: 5000
- name: train_3
num_bytes: 1113535830.0
num_examples: 5000
- name: train_4
num_bytes: 1489647293.0
num_examples: 5000
- name: train_5
num_bytes: 1416405046.0
num_examples: 5000
- name: test
num_bytes: 886300388.18
num_examples: 3005
download_size: 6939675259
dataset_size: 7365653599.18
---
# Dataset Card for "28k_vietnamese_voice_augmented_of_VigBigData"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,085 | [
[
-0.04150390625,
-0.0252685546875,
0.006145477294921875,
0.0308685302734375,
-0.0162811279296875,
0.009979248046875,
0.01023101806640625,
-0.01308441162109375,
0.049285888671875,
0.05523681640625,
-0.048095703125,
-0.062469482421875,
-0.03533935546875,
-0.020... |
MU-NLPC/Calc-svamp | 2023-10-30T15:05:26.000Z | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"math world problems",
"math",
"arithmetics",
"arxiv:2305.15017",
"region:us"
] | MU-NLPC | null | null | 0 | 44 | 2023-09-08T14:56:46 | ---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-generation
tags:
- math world problems
- math
- arithmetics
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 335744
num_examples: 1000
download_size: 116449
dataset_size: 335744
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 335744
num_examples: 1000
download_size: 116449
dataset_size: 335744
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetics. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple html-like language that can be easily
parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
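As a sketch of how this markup can be consumed (the chain string below is a hypothetical example in the described format, not an instance from the dataset, and the `id="calculator"` attribute is an assumption), the three tag types can be extracted with a few lines of stdlib Python instead of BeautifulSoup:

```python
import re

# Hypothetical chain string in the html-like format described above:
# one gadget call, the tool's output, and the final result.
chain = (
    '<gadget id="calculator">12 * 3</gadget>'
    '<output>36</output>'
    '<result>36</result>'
)

def parse_chain(chain: str) -> dict:
    """Extract gadget calls, tool outputs, and the final result tag contents."""
    return {
        "gadgets": re.findall(r"<gadget[^>]*>(.*?)</gadget>", chain, re.S),
        "outputs": re.findall(r"<output>(.*?)</output>", chain, re.S),
        "result": re.search(r"<result>(.*?)</result>", chain, re.S).group(1),
    }

parsed = parse_chain(chain)
print(parsed["gadgets"])  # ['12 * 3']
print(parsed["result"])   # '36'
```

The same patterns work on real `chain` values once the dataset is loaded, since the tags are never nested.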
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction process
We created the dataset by converting the **equation** attribute in the original dataset to a sequence (chain) of calculations, with final one being the result to the math problem.
We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
However, for SVAMP specifically, we detected no data leaks and filtered no data.
## Content and data splits
The dataset contains the same data instances as the original dataset except for a correction of an inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.
## Attributes:
- **id**: problem id from the original dataset
- **question**: the question intended to answer
- **chain**: series of simple operations (derived from `equation`) that leads to the solution
- **result**: the result (number) as a string
- **result_float**: result converted to a floating point
- **equation**: a nested expression that evaluates to the correct result
- **problem_type**: a category of the problem
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original SVAMP dataset and repo**](https://github.com/arkilpatel/SVAMP/)
- [**original SVAMP paper**](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of dataset in research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` | 5,415 | [
[
-0.0352783203125,
-0.0263824462890625,
0.0170440673828125,
0.01256561279296875,
-0.007083892822265625,
-0.00751495361328125,
-0.0122833251953125,
-0.0277252197265625,
0.015838623046875,
0.0301361083984375,
-0.052459716796875,
-0.0260009765625,
-0.04266357421875,... |
hxyue1/ask_theology | 2023-10-20T22:41:23.000Z | [
"region:us"
] | hxyue1 | null | null | 1 | 44 | 2023-09-20T03:45:39 | ---
dataset_info:
features:
- name: title
dtype: string
- name: authors
dtype: string
- name: chapter
dtype: string
- name: content
dtype: string
- name: embeddings
sequence: float64
splits:
- name: train
num_bytes: 71960834
num_examples: 7534
download_size: 0
dataset_size: 71960834
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ask_theology"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 588 | [
[
-0.0499267578125,
-0.02728271484375,
0.018157958984375,
0.0186309814453125,
-0.00341033935546875,
-0.01904296875,
0.005962371826171875,
-0.0019130706787109375,
0.0531005859375,
0.039794921875,
-0.060577392578125,
-0.046478271484375,
-0.043853759765625,
-0.02... |
Rianknow/creditscoring | 2023-09-26T12:14:58.000Z | [
"region:us"
] | Rianknow | null | null | 0 | 44 | 2023-09-26T09:02:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Juniplayground/bronze-dataset-cot_type_v1 | 2023-09-26T16:19:35.000Z | [
"region:us"
] | Juniplayground | null | null | 0 | 44 | 2023-09-26T16:19:33 | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 8527256
num_examples: 8269
download_size: 3319227
dataset_size: 8527256
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bronze-dataset-cot_type_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 525 | [
[
-0.0296173095703125,
-0.006214141845703125,
0.01480865478515625,
0.0265655517578125,
-0.028106689453125,
0.00982666015625,
0.036956787109375,
0.0040130615234375,
0.0504150390625,
0.03955078125,
-0.0614013671875,
-0.0726318359375,
-0.05364990234375,
-0.013359... |
Pavitra05/finalContent | 2023-10-02T20:32:21.000Z | [
"region:us"
] | Pavitra05 | null | null | 0 | 44 | 2023-10-02T20:25:00 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.057098388671875,
0.028839111328125,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005046844482421875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.06036376953125,
0... |
KonstantyM/science_qa | 2023-10-08T00:23:32.000Z | [
"region:us"
] | KonstantyM | null | null | 0 | 44 | 2023-10-08T00:19:43 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 7497499873
num_examples: 4432703
download_size: 4282191598
dataset_size: 7497499873
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "science_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 525 | [
[
-0.03350830078125,
-0.01226043701171875,
0.034942626953125,
0.01477813720703125,
-0.019805908203125,
0.00873565673828125,
0.039703369140625,
-0.01020050048828125,
0.0692138671875,
0.0154876708984375,
-0.052459716796875,
-0.049224853515625,
-0.0272216796875,
... |
minh21/COVID-QA-Chunk-64-testset-biencoder-data-90_10 | 2023-10-09T04:29:10.000Z | [
"region:us"
] | minh21 | null | null | 0 | 44 | 2023-10-09T04:29:09 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: context_chunks
sequence: string
- name: document_id
dtype: int64
- name: id
dtype: int64
- name: context
dtype: string
splits:
- name: train
num_bytes: 13595044
num_examples: 203
download_size: 459357
dataset_size: 13595044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "COVID-QA-Chunk-64-testset-biencoder-data-90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 664 | [
[
-0.03564453125,
-0.0181732177734375,
-0.0002548694610595703,
0.023681640625,
-0.021087646484375,
-0.0008187294006347656,
0.031280517578125,
-0.0043487548828125,
0.05224609375,
0.010650634765625,
-0.044036865234375,
-0.0399169921875,
-0.035980224609375,
-0.00... |
dhkim123/jy_finetune_sd | 2023-10-11T21:56:16.000Z | [
"region:us"
] | dhkim123 | null | null | 0 | 44 | 2023-10-11T05:38:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 37668449.2
num_examples: 1300
download_size: 35715363
dataset_size: 37668449.2
---
# Dataset Card for "jy_finetune_sd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 449 | [
[
-0.05377197265625,
-0.02325439453125,
0.0185089111328125,
0.0090789794921875,
-0.01442718505859375,
-0.016387939453125,
0.004364013671875,
0.0011796951293945312,
0.06768798828125,
0.0367431640625,
-0.060791015625,
-0.04833984375,
-0.037933349609375,
-0.01994... |
carnival13/rbrt_test_lrg | 2023-10-14T07:28:53.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 44 | 2023-10-14T07:28:04 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1270137685
num_examples: 900000
download_size: 282453475
dataset_size: 1270137685
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rbrt_test_lrg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 502 | [
[
-0.04620361328125,
-0.04559326171875,
0.0007486343383789062,
0.0023708343505859375,
-0.01546478271484375,
0.01447296142578125,
0.00897216796875,
-0.0127716064453125,
0.041748046875,
0.0269622802734375,
-0.048736572265625,
-0.036285400390625,
-0.0275421142578125,... |
jihye-moon/LawQA-Ko | 2023-10-30T06:55:41.000Z | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:ko",
"legal",
"region:us"
] | jihye-moon | null | null | 0 | 44 | 2023-10-19T07:30:09 | ---
task_categories:
- conversational
language:
- ko
tags:
- legal
size_categories:
- 1K<n<10K
---
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset consists of questions and answers about Korean law.
The data was created by merging the questions and answers from the datasets below.
| Source | Dataset Page | Rows |
|---|---|---|
|[Easy to Find, Practical Law](https://www.easylaw.go.kr/CSP/OnhunqueansLstRetrieve.laf?search_put=)| [jiwoochris/easylaw_kr](https://huggingface.co/datasets/jiwoochris/easylaw_kr) | 2,195 rows |
|[Korea Legal Aid Corporation](https://www.klac.or.kr/legalinfo/counsel.do)| [jihye-moon/klac_legal_aid_counseling](https://huggingface.co/datasets/jihye-moon/klac_legal_aid_counseling) | 10,037 rows |
※ All of this data was built by crawling web pages.
※ There are plans to update this dataset for instruction tuning by adding the legal basis for each answer (precedents, statutes) to a `precedent` column.
[
-0.011871337890625,
-0.02532958984375,
0.013275146484375,
0.036712646484375,
-0.0316162109375,
-0.029266357421875,
-0.006969451904296875,
0.00214385986328125,
0.023406982421875,
0.038330078125,
-0.0288848876953125,
-0.06640625,
-0.04266357421875,
0.011581420... |
TheAIchemist13/hindi_asr_dataset_2 | 2023-10-19T07:51:24.000Z | [
"region:us"
] | TheAIchemist13 | null | null | 0 | 44 | 2023-10-19T07:51:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcriptions
dtype: string
splits:
- name: train
num_bytes: 60362774.0
num_examples: 175
- name: test
num_bytes: 3849203.0
num_examples: 5
download_size: 59670172
dataset_size: 64211977.0
---
# Dataset Card for "hindi_asr_dataset_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 630 | [
[
-0.0204925537109375,
-0.01401519775390625,
-0.01384735107421875,
0.0335693359375,
-0.0209808349609375,
0.01419830322265625,
0.01434326171875,
-0.01198577880859375,
0.04266357421875,
0.016387939453125,
-0.044281005859375,
-0.02911376953125,
-0.060455322265625,
... |
Luciya/llama-2-nuv-intent-noE-xl | 2023-10-23T09:00:06.000Z | [
"region:us"
] | Luciya | null | null | 0 | 44 | 2023-10-23T09:00:04 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1420374
num_examples: 3224
download_size: 224581
dataset_size: 1420374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama-2-nuv-intent-noE-xl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 452 | [
[
-0.0178985595703125,
-0.0145263671875,
0.027801513671875,
0.0279083251953125,
-0.0323486328125,
-0.00937652587890625,
0.0282440185546875,
-0.003337860107421875,
0.0703125,
0.047607421875,
-0.061431884765625,
-0.0640869140625,
-0.052703857421875,
-0.005966186... |
jxm/mpqa | 2023-10-25T17:43:00.000Z | [
"region:us"
] | jxm | null | null | 0 | 44 | 2023-10-25T17:41:36 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 263258
num_examples: 8603
- name: test
num_bytes: 62502
num_examples: 2000
- name: dev
num_bytes: 7835
num_examples: 256
download_size: 0
dataset_size: 333595
---
# Dataset Card for "mpqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 650 | [
[
-0.051788330078125,
-0.0006847381591796875,
0.027008056640625,
0.00800323486328125,
-0.01284027099609375,
0.002105712890625,
0.041534423828125,
0.0103759765625,
0.03985595703125,
0.029937744140625,
-0.062164306640625,
-0.043701171875,
-0.04150390625,
-0.0281... |
lukemann/baby-agi-dataset-v0 | 2023-10-30T09:16:19.000Z | [
"region:us"
] | lukemann | null | null | 0 | 44 | 2023-10-30T05:36:34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: instruction
dtype: string
- name: trajectory
list:
- name: image_id
dtype: string
- name: action_options
list:
- name: index
dtype: int32
- name: top_left
sequence: int32
- name: bottom_right
sequence: int32
- name: action_taken
struct:
- name: type
dtype: string
- name: value
dtype: string
- name: action_option_index
dtype: int32
splits:
- name: train
num_bytes: 722
num_examples: 1
download_size: 1432409
dataset_size: 722
---
# BabyAGI (Dataset)
The initial demonstration dataset follows the Hugging Face dataset spec, with the raw data split into two components: trajectory images and trajectory metadata. The metadata is stored in the raw dataset, and the images are stored on S3. The data is loaded using the dataloader defined in [baby_agi_dataset.py](./baby_agi_dataset.py).
**Data Layout:**
```plaintext
├── data
│ ├── metadata_0.json
│ ├── metadata_1.json
│ └── ...
├-- baby_agi_dataset.py
```
### Metadata Format (.json)
```json
[
{
"id": "<trajectory_id_hash>",
"instruction": "<some instruction>",
"trajectory": [
{
"image_id": "image_id",
"action_options": [
{
"index": 0,
"top_left": [120, 340],
"bottom_right": [140, 440],
},
...
],
"action_taken": {
"type": "click",
"value": "value (only for type and scroll)",
"action_option_index": 0
}
},
...
]
},
]
```
## Action Types
The dataset metadata includes three types of actions: "click", "type", and "scroll". The `action_option_index` field indicates the index of the clicked element within the `action_options` list.
1. **Click**: Represents a user clicking on an element.
2. **Type**: Represents a user typing into an input field.
3. **Scroll**: Represents a user scrolling the viewport. The `value` field indicates the direction of the scroll, with "up" corresponding to a 200px scroll upwards and "down" corresponding to a 200px scroll downwards. Note that `top_left` and `bottom_right` will always be zero-arrays for these.
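The scroll semantics above can be sketched as a tiny helper (an illustrative function, not part of any published API; the 200px constant just mirrors the description):

```python
SCROLL_STEP_PX = 200  # per the scroll action description above

def apply_scroll(offset_px: int, value: str) -> int:
    """Return the new vertical viewport offset after a scroll action.

    `value` is the scroll action's "value" field: "up" or "down".
    The offset is clamped so it never scrolls above the top of the page.
    """
    if value == "down":
        return offset_px + SCROLL_STEP_PX
    if value == "up":
        return max(0, offset_px - SCROLL_STEP_PX)
    raise ValueError(f"unknown scroll direction: {value!r}")

print(apply_scroll(400, "down"))  # 600
print(apply_scroll(100, "up"))    # 0 (clamped at the top)
```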
## Dataset Generation Pipeline
The dataset is generated through the following steps:
1. **Load Demo**: The demo is loaded from the Hugging Face dataset.
2. **Load Trace**: The trace is loaded from the Globus dataset.
3. **Process Trajectories**: For each Mind2Web (M2W) trajectory:
a) **Map Actions**: M2W actions are mapped to Playwright trace actions using the timestamp in `dom_content.json`.
   b) **Screenshot DOM**: The DOM is screenshotted just before the action.
c) **Map Candidates**: `pos_candidates` and `neg_candidates` from the M2W action metadata are mapped to HTML bounding boxes via class+id matching from the action metadata. New bounding box coordinates are obtained for each.
d) **Craft Meta + Screenshot Pair**: The pair of metadata and screenshots is crafted and saved/appended.
4. **Save Data**: The updated data directory is saved to S3 and Hugging Face.
### Screenshots
Screenshots in this dataset are generated from the before states of Mind2Web trajectory traces. Each image has a width of 2036 and a height of 1144. For alternate screen sizes (via augmentation), padding is added to maintain the aspect ratio. This ensures that the content of the screenshot remains consistent across different screen sizes.
### Options Generation
Options in this dataset are generated from `positive_candidates` (always one) and `negative_candidates` in the Mind2Web (M2W) dataset. The M2W dataset labels *all* possible interactions on the DOM. Therefore, the 50 largest area-wise options within the viewport containing the positive candidate are selected.
### Scrolling
The Mind2Web (M2W) dataset captures the entire DOM, so when the selected option action is not in the viewport, artificial scroll actions are created. This action has two possible values: "up" and "down". Each of which corresponds to a 200px scroll in the respective direction.
### Selecting
The "Select" action in the Mind2Web (M2W) dataset is recorded when a user makes a selection from a dropdown list. In this dataset, we represent it as a sequence of two distinct actions in a trajectory:
1. **Click**: The user clicks on the dropdown element.
2. **Type**: The user types the desired value followed by Enter.
## Usage
To use the dataset in your Python program, you can load it using the `load_dataset` function from the `datasets` library:
```python
from datasets import load_dataset
# Load the dataset from the Hugging Face Hub
dataset = load_dataset("lukemann/baby-agi-dataset-v0")
first_row = dataset['train'][0]
print(first_row)
```
This will load the dataset and print the first row of the training set.
For a short demo, refer to the [demo.py](./demo.py) file. | 5,265 | [
[
-0.02911376953125,
-0.03167724609375,
0.0272369384765625,
0.0025920867919921875,
-0.000774383544921875,
-0.01531982421875,
0.00591278076171875,
-0.01404571533203125,
0.039581298828125,
0.02276611328125,
-0.09027099609375,
-0.034210205078125,
-0.034942626953125,
... |
norec | 2023-01-25T14:41:38.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:nb",
"language:nn",
"language:no",
"license:cc-by-nc-... | null | NoReC was created as part of the SANT project (Sentiment Analysis for Norwegian Text), a collaboration between the Language Technology Group (LTG) at the Department of Informatics at the University of Oslo, the Norwegian Broadcasting Corporation (NRK), Schibsted Media Group and Aller Media. This first release of the corpus comprises 35,194 reviews extracted from eight different news sources: Dagbladet, VG, Aftenposten, Bergens Tidende, Fædrelandsvennen, Stavanger Aftenblad, DinSide.no and P3.no. In terms of publishing date the reviews mainly cover the time span 2003–2017, although it also includes a handful of reviews dating back as far as 1998. | @InProceedings{VelOvrBer18,
author = {Erik Velldal and Lilja Ovrelid and
Eivind Alexander Bergem and Cathrine Stadsnes and
Samia Touileb and Fredrik Jorgensen},
title = {{NoReC}: The {N}orwegian {R}eview {C}orpus},
booktitle = {Proceedings of the 11th edition of the
Language Resources and Evaluation Conference},
year = {2018},
address = {Miyazaki, Japan},
pages = {4186--4191}
} | 1 | 43 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- nb
- nn
- 'no'
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: norec
pretty_name: NoReC
dataset_info:
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
- name: xpos_tags
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1254757266
num_examples: 680792
- name: validation
num_bytes: 189534106
num_examples: 101106
- name: test
num_bytes: 193801708
num_examples: 101594
download_size: 212492611
dataset_size: 1638093080
---
# Dataset Card for NoReC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/ltgoslo/norec
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2018/pdf/851.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset contains the Norwegian Review Corpus (NoReC), created for the purpose of training and evaluating models for document-level sentiment analysis. More than 43,000 full-text reviews have been collected from major Norwegian news sources and cover a range of different domains, including literature, movies, video games, restaurants, music and theater, in addition to product reviews across a range of categories. Each review is labeled with a manually assigned score of 1–6, as provided by the rating of the original author.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The sentences in the dataset are in Norwegian (nb, nn, no).
## Dataset Structure
### Data Instances
A sample from training set is provided below:
```
{'deprel': ['det',
'amod',
'cc',
'conj',
'nsubj',
'case',
'nmod',
'cop',
'case',
'case',
'root',
'flat:name',
'flat:name',
'punct'],
'deps': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None'],
'feats': ["{'Gender': 'Masc', 'Number': 'Sing', 'PronType': 'Dem'}",
"{'Definite': 'Def', 'Degree': 'Pos', 'Number': 'Sing'}",
'None',
"{'Definite': 'Def', 'Degree': 'Pos', 'Number': 'Sing'}",
"{'Definite': 'Def', 'Gender': 'Masc', 'Number': 'Sing'}",
'None',
'None',
"{'Mood': 'Ind', 'Tense': 'Pres', 'VerbForm': 'Fin'}",
'None',
'None',
'None',
'None',
'None',
'None'],
'head': ['5',
'5',
'4',
'2',
'11',
'7',
'5',
'11',
'11',
'11',
'0',
'11',
'11',
'11'],
'idx': '000000-02-01',
'lemmas': ['den',
'andre',
'og',
'sist',
'sesong',
'av',
'Rome',
'være',
'ute',
'på',
'DVD',
'i',
'Norge',
'$.'],
'misc': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
"{'SpaceAfter': 'No'}",
'None'],
'pos_tags': [5, 0, 4, 0, 7, 1, 11, 3, 1, 1, 11, 1, 11, 12],
'text': 'Den andre og siste sesongen av Rome er ute på DVD i Norge.',
'tokens': ['Den',
'andre',
'og',
'siste',
'sesongen',
'av',
'Rome',
'er',
'ute',
'på',
'DVD',
'i',
'Norge',
'.'],
'xpos_tags': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None']}
```
### Data Fields
The data instances have the following fields:
- deprel: [More Information Needed]
- deps: [More Information Needed]
- feats: [More Information Needed]
- head: [More Information Needed]
- idx: index
- lemmas: lemmas of all tokens
- misc: [More Information Needed]
- pos_tags: part of speech tags
- text: text string
- tokens: tokens
- xpos_tags: [More Information Needed]
The part-of-speech tags correspond to these labels: "ADJ" (0), "ADP" (1), "ADV" (2), "AUX" (3), "CCONJ" (4), "DET" (5), "INTJ" (6), "NOUN" (7), "NUM" (8), "PART" (9), "PRON" (10), "PROPN" (11), "PUNCT" (12), "SCONJ" (13), "SYM" (14), "VERB" (15), "X" (16).
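For illustration, the integer ids can be decoded back to label strings with a plain lookup (the list below just restates the mapping above; `decode_pos_tags` is a hypothetical helper, not part of the `datasets` library):

```python
# Label order matches the ClassLabel mapping in the dataset card above.
POS_LABELS = [
    "ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
    "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X",
]

def decode_pos_tags(tag_ids):
    """Map integer part-of-speech ids to their string labels."""
    return [POS_LABELS[i] for i in tag_ids]

# pos_tags from the sample training instance above
print(decode_pos_tags([5, 0, 4, 0, 7, 1, 11, 3, 1, 1, 11, 1, 11, 12]))
# ['DET', 'ADJ', 'CCONJ', 'ADJ', 'NOUN', 'ADP', 'PROPN', 'AUX',
#  'ADP', 'ADP', 'PROPN', 'ADP', 'PROPN', 'PUNCT']
```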
### Data Splits
The training, validation, and test set contain `680792`, `101106`, and `101594` sentences respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{VelOvrBer18,
author = {Erik Velldal and Lilja {\O}vrelid and
Eivind Alexander Bergem and Cathrine Stadsnes and
Samia Touileb and Fredrik J{\o}rgensen},
title = {{NoReC}: The {N}orwegian {R}eview {C}orpus},
booktitle = {Proceedings of the 11th edition of the
Language Resources and Evaluation Conference},
year = {2018},
address = {Miyazaki, Japan},
pages = {4186--4191}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. | 7,268 | [
[
-0.04150390625,
-0.03411865234375,
0.0029296875,
0.005023956298828125,
-0.02197265625,
0.0030155181884765625,
-0.01708984375,
-0.021697998046875,
0.04052734375,
0.03729248046875,
-0.054046630859375,
-0.08551025390625,
-0.0382080078125,
0.017486572265625,
... |
AConsApart/anime_subtitles_DialoGPT | 2021-03-06T02:31:23.000Z | [
"region:us"
] | AConsApart | null | null | 1 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
AdWeeb/DravidianMT | 2021-05-21T05:05:21.000Z | [
"region:us"
] | AdWeeb | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
Adnan/Urdu_News_Headlines | 2021-03-01T08:39:32.000Z | [
"region:us"
] | Adnan | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
AhmadSawal/qa | 2022-01-26T20:57:27.000Z | [
"region:us"
] | AhmadSawal | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
Akshith/aa | 2021-05-14T15:36:14.000Z | [
"region:us"
] | Akshith | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
Akshith/g_rock | 2021-05-14T15:34:01.000Z | [
"region:us"
] | Akshith | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
AlgoveraAI/CryptoPunks | 2022-02-28T15:25:44.000Z | [
"region:us"
] | AlgoveraAI | CryptoPunks is a non-fungible token (NFT) collection on the Ethereum blockchain. The dataset contains 10,000 CryptoPunk images, most of humans but also of three special types: Zombie (88), Ape (24) and Alien (9). They are provided with both clear backgrounds and teal backgrounds. | null | 4 | 43 | 2022-03-02T23:29:22 | # Dataset Card for CryptoPunks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Ocean Protocol](#ocean-protocol)
- [Algovera](#algovera)
## Dataset Description
- **Homepage:** https://market.oceanprotocol.com/asset/did:op:C9D0568838fa670baEe7195Ea443b32EfCAc2281
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
<img src="punks.png" width="100%">
### Dataset Summary
CryptoPunks is a non-fungible token (NFT) collection on the Ethereum blockchain. The dataset contains 10,000 CryptoPunk images, most of humans but also of three special types: Zombie (88), Ape (24) and Alien (9). They are provided with both clear backgrounds and teal backgrounds.
## Dataset Structure
### Data Fields
- img: 24x24x3 image
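For a rough sense of scale, the raw footprint of the collection can be estimated from these dimensions. This is a back-of-the-envelope sketch assuming 8-bit colour channels (an assumption, since the card does not state the pixel depth):

```python
# Estimate the uncompressed size of 10,000 RGB images at 24x24 pixels.
n_images, height, width, channels = 10_000, 24, 24, 3
bytes_per_channel = 1  # assumed uint8 per colour channel

total_bytes = n_images * height * width * channels * bytes_per_channel
print(total_bytes)                    # 17280000
print(round(total_bytes / 2**20, 1))  # roughly 16.5 MiB uncompressed
```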
## Additional Information
### Ocean Protocol
We are working on a Proof of Concept for using HuggingFace with datasets and algorithms on the [Ocean Marketplace](https://market.oceanprotocol.com/). Ocean is an open source infrastructure for monetizing private datasets and training using private AI technologies such as Compute-to-Data.
### Algovera
Algovera is a community working to facilitate and accelerate the development of decentralised AI applications and research.
* Join our community on [Discord](https://discord.com/invite/e65RuHSDS5).
* Contribute to our [GitHub](https://github.com/AlgoveraAI).
* Check out our [Website](https://www.algovera.ai/).
* Find more resources on our [Notion](https://algovera.notion.site/).
* Subscribe to our [Calendar](https://calendar.google.com/calendar/embed?src=c_4qajdfj4imie9cpnkbvkrc7ri4%40group.calendar.google.com). | 1,807 | [
[
-0.059356689453125,
-0.044036865234375,
0.011566162109375,
0.01348876953125,
-0.040618896484375,
0.010528564453125,
0.001407623291015625,
-0.06103515625,
0.0648193359375,
0.029632568359375,
-0.038970947265625,
-0.0635986328125,
-0.026275634765625,
-0.0171508... |
Aliseyfi/event_token_type | 2021-12-23T18:07:33.000Z | [
"region:us"
] | Aliseyfi | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
Anurag-Singh-creator/tasks | 2021-12-12T20:16:49.000Z | [
"region:us"
] | Anurag-Singh-creator | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
Avishekavi/Avi | 2021-03-29T18:59:18.000Z | [
"region:us"
] | Avishekavi | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
BatuhanYilmaz/github-issues | 2022-01-24T08:40:25.000Z | [
"region:us"
] | BatuhanYilmaz | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Binbin/my_dataset | 2021-03-22T01:15:48.000Z | [
"region:us"
] | Binbin | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
BlakesOrb6/Fred-Flintstone | 2021-11-08T17:43:33.000Z | [
"region:us"
] | BlakesOrb6 | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
CAGER/rick | 2021-07-09T02:05:44.000Z | [
"region:us"
] | CAGER | null | null | 0 | 43 | 2022-03-02T23:29:22 | welcoe to cager data set | 24 | [
[
-0.0216064453125,
0.01468658447265625,
-0.007244110107421875,
0.0213775634765625,
-0.01172637939453125,
-0.0034332275390625,
0.00786590576171875,
-0.00360870361328125,
0.0146636962890625,
0.05670166015625,
-0.006771087646484375,
-0.03778076171875,
-0.04147338867... |
CShorten/KerasBERT | 2022-06-28T11:51:07.000Z | [
"region:us"
] | CShorten | null | null | 2 | 43 | 2022-03-02T23:29:22 | <h1>KerasBERT</h1>
<ul>
<li>All Data</li>
<li>Keras API Docs</li>
<li>Keras Developer Guides</li>
<li>Keras Code Examples</li>
</ul>
Please cite KerasBERT: Modeling the Keras Language, Connor Shorten and Taghi M. Khoshgoftaar. https://ieeexplore.ieee.org/abstract/document/9679980. | 282 | [
[
-0.0190582275390625,
-0.03826904296875,
0.0438232421875,
0.002716064453125,
-0.0153045654296875,
-0.017059326171875,
-0.00508880615234375,
-0.031951904296875,
0.015411376953125,
0.04742431640625,
-0.04742431640625,
-0.045989990234375,
-0.030059814453125,
-0.... |
ChadxxxxHall/Inter-vision | 2021-08-11T22:44:47.000Z | [
"region:us"
] | ChadxxxxHall | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
GEM/SIMPITIKI | 2022-10-24T15:30:05.000Z | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:crowd-sourced",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:it",
"license:cc-by-4.0",
"region:us"
] | GEM | SIMPITIKI is a Simplification corpus for Italian and it consists of two sets of simplified pairs: the first one is harvested from the Italian Wikipedia in a semi-automatic way; the second one is manually annotated sentence-by-sentence from documents in the administrative domain. | @article{tonelli2016simpitiki,
title={SIMPITIKI: a Simplification corpus for Italian},
author={Tonelli, Sara and Aprosio, Alessio Palmero and Saltori, Francesca},
journal={Proceedings of CLiC-it},
year={2016}
} | 2 | 43 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowd-sourced
language_creators:
- unknown
language:
- it
license:
- cc-by-4.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: SIMPITIKI
---
# Dataset Card for GEM/SIMPITIKI
## Dataset Description
- **Homepage:** https://github.com/dhfbk/simpitiki
- **Repository:** https://github.com/dhfbk/simpitiki/tree/master/corpus
- **Paper:** http://ceur-ws.org/Vol-1749/paper52.pdf
- **Leaderboard:** N/A
- **Point of Contact:** Sara Tonelli
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/SIMPITIKI).
### Dataset Summary
SIMPITIKI is an Italian text simplification dataset. Its examples were selected from Italian Wikipedia such that their edit-tracking descriptions contain any of the words "Simplified"/"Simplify"/"Simplification".
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/SIMPITIKI')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/SIMPITIKI).
#### website
[Github](https://github.com/dhfbk/simpitiki)
#### paper
[Website](http://ceur-ws.org/Vol-1749/paper52.pdf)
#### authors
Sara Tonelli (Fondazione Bruno Kessler), Alessio Palmero Aprosio (Fondazione Bruno Kessler), Francesca Saltori (Fondazione Bruno Kessler)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/dhfbk/simpitiki)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/dhfbk/simpitiki/tree/master/corpus)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Website](http://ceur-ws.org/Vol-1749/paper52.pdf)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@article{tonelli2016simpitiki,
title={SIMPITIKI: a Simplification corpus for Italian},
author={Tonelli, Sara and Aprosio, Alessio Palmero and Saltori, Francesca},
journal={Proceedings of CLiC-it},
year={2016}
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Sara Tonelli
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
satonelli@fbk.eu
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
None
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`Italian`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-4.0: Creative Commons Attribution 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The purpose of the dataset is to train NLG models to simplify complex text by learning different types of transformations (verb to noun, noun to verb, deletion, insertion, etc.).
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Simplification
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
This dataset aims to advance research on text simplification in the Italian language across different text transformations.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`, `independent`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Fondazione Bruno Kessler (FBK)
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Sara Tonelli (Fondazione Bruno Kessler), Alessio Palmero Aprosio (Fondazione Bruno Kessler), Francesca Saltori (Fondazione Bruno Kessler)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
EU Horizon 2020 Programme via the SIMPATICO Project (H2020-EURO-6-2015, n. 692819)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Sebastien Montella (Orange Labs), Vipul Raheja (Grammarly Inc.)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
Each sample comes with the following fields:
- `gem_id` (string): Unique sample ID
- `text` (string): The raw text to be simplified
- `simplified_text` (string): The simplified version of the `text` field
- `transformation_type` (string): Nature of the transformation applied to the raw text in order to simplify it
- `source_dataset` (string): Initial dataset source of the sample. Values: 'itwiki' (for Italian Wikipedia) or 'tn' (manually annotated administrative documents from the Municipality of Trento, Italy)
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The dataset is organized as pairs in which the raw text (input) is associated with its simplified text (output). The editing transformation and the source dataset of each sample are also provided for advanced analysis.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
The SIMPITIKI dataset selects documents from Italian Wikipedia whose edit-tracking descriptions contain any of the words "Simplified"/"Simplify"/"Simplification". For the Public Administration domain, documents of the Municipality of Trento (Italy) were manually annotated.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{"transformation_id": 31, "transformation_type": "Transformation - Lexical Substitution (word level)", "source_dataset": "tn", "text": "- assenza per <del>e</del>si<del>genze</del> particolari attestate da relazione dei servizi sociali;", "simplified_text": "- assenza per <ins>bi</ins>s<ins>ogn</ins>i particolari attestati da relazione dei servizi sociali;"}
```
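The `<del>`/`<ins>` tags in the instance above mark the edits between the raw and simplified versions; stripping them recovers the plain strings. A minimal sketch (the `plain` helper is illustrative and not part of the dataset's own tooling):

```python
import re

# SIMPITIKI marks edits inline: <del>...</del> in `text`, <ins>...</ins> in `simplified_text`.
TAG = re.compile(r"</?(?:del|ins)>")

def plain(marked: str) -> str:
    """Strip the edit markup, keeping the span contents."""
    return TAG.sub("", marked)

sample = {
    "text": "- assenza per <del>e</del>si<del>genze</del> particolari attestate da relazione dei servizi sociali;",
    "simplified_text": "- assenza per <ins>bi</ins>s<ins>ogn</ins>i particolari attestati da relazione dei servizi sociali;",
}

print(plain(sample["text"]))             # - assenza per esigenze particolari ...
print(plain(sample["simplified_text"]))  # - assenza per bisogni particolari ...
```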
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
Several splits are proposed to train models on different configurations:
- "train": Training samples randomly selected from the initial corpus. 816 training samples.
- "validation": Validation samples randomly selected from the initial corpus. 174 validation samples.
- "test": Test samples randomly selected from the initial corpus. 176 test samples.
- "challenge_seen_transformations_train": This training challenge split is restricted to specific transformations used to simplify the raw text: "Split", "Merge", "Reordering", "Insert - Verb", "Insert - Other", "Delete - Verb", "Delete - Other", "Transformation - Lexical Substitution (word level)", "Transformation - Anaphoric replacement", "Transformation - Noun to Verb", "Transformation - Verbal Features". 562 training samples.
- "challenge_seen_transformations_val": This validation challenge split includes the same transformations as those observed in training. 121 validation samples.
- "challenge_seen_transformations_test": This test challenge split includes the same transformations as those observed in training. 127 test samples.
- "challenge_unseen_transformations_test": This test challenge split covers transformations not seen in training: "Insert - Subject", "Delete - Subject", "Transformation - Lexical Substitution (phrase level)", "Transformation - Verb to Noun (nominalization)", "Transformation - Verbal Voice". 356 test samples.
- "challenge_itwiki_train": This training challenge split includes random samples with the Italian Wikipedia as source dataset. 402 training samples.
- "challenge_itwiki_val": This validation challenge split includes random samples with the Italian Wikipedia as source dataset. 86 validation samples.
- "challenge_itwiki_test": This test challenge split includes random samples with the Italian Wikipedia as source dataset. 87 test samples.
- "challenge_tn_test": This test challenge split includes all samples with the Municipality of Trento administrative documents ('tn') as source dataset. 591 test samples.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The training ratio is set to 0.7. The validation and test sets roughly equally divide the remaining 30% of the dataset.
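As a quick sanity check, the documented split sizes are consistent with that ratio:

```python
# Documented split sizes: 816 train / 174 validation / 176 test.
train, val, test = 816, 174, 176
total = train + val + test

print(total)                    # 1166
print(round(train / total, 2))  # 0.7
print(round(val / total, 2))    # 0.15
print(round(test / total, 2))   # 0.15
```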
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset promotes the simplification task for the Italian language.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Models can be evaluated on whether they can simplify text across different simplification transformations.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
yes
#### Split Information
<!-- info: Describe how the new splits were created -->
<!-- scope: periscope -->
The SIMPITIKI dataset provides a single file. Several splits are proposed to train models on different configurations:
- "train": Training samples randomly selected from the initial corpus. 816 training samples.
- "validation": Validation samples randomly selected from the initial corpus. 174 validation samples.
- "test": Test samples randomly selected from the initial corpus. 176 test samples.
- "challenge_seen_transformations_train": This training challenge split is restricted to specific transformations used to simplify the raw text: "Split", "Merge", "Reordering", "Insert - Verb", "Insert - Other", "Delete - Verb", "Delete - Other", "Transformation - Lexical Substitution (word level)", "Transformation - Anaphoric replacement", "Transformation - Noun to Verb", "Transformation - Verbal Features". 562 training samples.
- "challenge_seen_transformations_val": This validation challenge split includes the same transformations as those observed in training. 121 validation samples.
- "challenge_seen_transformations_test": This test challenge split includes the same transformations as those observed in training. 127 test samples.
- "challenge_unseen_transformations_test": This test challenge split covers transformations not seen in training: "Insert - Subject", "Delete - Subject", "Transformation - Lexical Substitution (phrase level)", "Transformation - Verb to Noun (nominalization)", "Transformation - Verbal Voice". 356 test samples.
- "challenge_itwiki_train": This training challenge split includes random samples with the Italian Wikipedia as source dataset. 402 training samples.
- "challenge_itwiki_val": This validation challenge split includes random samples with the Italian Wikipedia as source dataset. 86 validation samples.
- "challenge_itwiki_test": This test challenge split includes random samples with the Italian Wikipedia as source dataset. 87 test samples.
- "challenge_tn_test": This test challenge split includes all samples with the Municipality of Trento administrative documents ('tn') as source dataset. 591 test samples.
#### Split Motivation
<!-- info: What aspects of the model's generation capacities were the splits created to test? -->
<!-- scope: periscope -->
The splits allow investigating the generalization of models across editing transformations ("challenge_seen_transformations_test" / "challenge_unseen_transformations_test") and transfer learning to a different domain ("challenge_tn_test").
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
- Coster and Kauchak, Simple English Wikipedia: A New Text Simplification Task, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 665–669, Portland, Oregon, June 19-24, 2011
- Xu et al, Optimizing Statistical Machine Translation for Text Simplification, Transactions of the Association for Computational Linguistics, vol. 4, pp. 401–415, 2016
- Aprosio et al, Neural Text Simplification in Low-Resource Conditions Using Weak Supervision, Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation (NeuralGen), pages 37–44, Minneapolis, Minnesota, USA, June 6, 2019
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
Simplification: Process that consists in transforming an input text to its simplified version.
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
The splits allow measuring the generalization of models across editing transformations ("challenge_seen_transformations_test" / "challenge_unseen_transformations_test") and transfer learning to a different domain ("challenge_tn_test").
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`BLEU`, `Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
- FKBLEU (https://aclanthology.org/Q16-1029.pdf): combines the Flesch-Kincaid index and iBLEU metrics.
- SARI (https://aclanthology.org/Q16-1029.pdf): compares system output against references and against the input sentence; it explicitly measures the goodness of words that are added, deleted, and kept by the systems.
- Word-level F1
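Of these, word-level F1 is the simplest to sketch: a bag-of-words overlap between system output and reference. This is an illustrative sketch, not the official SIMPITIKI scoring script:

```python
from collections import Counter

def word_f1(hypothesis: str, reference: str) -> float:
    """Word-level F1: harmonic mean of bag-of-words precision and recall."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((hyp & ref).values())  # words shared by both, with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(word_f1("la casa è grande", "la casa è molto grande"), 3))  # 0.889
```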
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
Most of the resources for text simplification are in English. To stimulate research in other languages, SIMPITIKI proposes an Italian corpus with complex-simple sentence pairs.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Text simplification allows a smooth reading of text to enhance understanding.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
Italian Wikipedia
(Manually) Annotated administrative documents from the Municipality of Trento, Italy
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`, `Offline media collection`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
SIMPITIKI is a combination of documents from Italian Wikipedia and from the Municipality of Trento, Italy.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
Samples from documents from the Municipality of Trento corpus are in the administrative domain.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by data curator
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
crowd-sourced
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
unknown
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Native speaker
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
0
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
0
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
unknown
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
Annotators specified any of the tags as designed by Brunato et al. (https://aclanthology.org/W15-1604/):
- Split: Splitting a clause into two clauses.
- Merge: Merging two or more clauses together.
- Reordering: Word order changes.
- Insert: Insertion of words or phrases that provide supportive information to the original sentence.
- Delete: Dropping redundant information.
- Transformation: Modification that can affect the sentence at the lexical, morpho-syntactic, and syntactic level: Lexical substitution (word level) / Lexical substitution (phrase level) / Anaphoric replacement / Noun to Verb / Verb to Noun / Verbal voice / Verbal features. These transformations can also give rise to overlapping phenomena.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
unknown
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The dataset is available online under the CC-BY 4.0 license.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
likely
#### Categories of PII
<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
The creators of SIMPITIKI want to promote text simplification for Italian because few resources are available in languages other than English.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
unsure
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`research use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`research use only`
### Known Technical Limitations
#### Discouraged Use Cases
<!-- info: What are some discouraged use cases of a model trained to maximize the proposed metrics on this dataset? In particular, think about settings where decisions made by a model that performs reasonably well on the metric may still have strong negative consequences for users or members of the public. -->
<!-- scope: microscope -->
The risk of surface-based metrics (BLEU, chrF++, etc.) for this task is that they do not check semantic adequacy: a simplified output can overlap heavily with the reference at the surface level while dropping or inverting the meaning of the input document.
| 24,027 | [
[
-0.040069580078125,
-0.05419921875,
0.031494140625,
-0.0005435943603515625,
-0.029022216796875,
-0.0218658447265625,
-0.0289306640625,
-0.023162841796875,
0.0430908203125,
0.01506805419921875,
-0.06011962890625,
-0.05792236328125,
-0.042388916015625,
0.02827... |
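The surface-metric caveat in the SIMPITIKI card above can be illustrated with a toy character n-gram F-score (a rough chrF-style sketch; the example sentences and the scoring function are illustrative assumptions, not part of SIMPITIKI or the official chrF implementation):

```python
from collections import Counter

def char_ngrams(text, n):
    """Multiset of character n-grams of a string (spaces kept, as in chrF)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def surface_overlap(hyp, ref, max_n=3):
    """Toy chrF-style score: mean character n-gram F1 for n = 1..max_n."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue
        match = sum((h & r).values())  # Counter & = multiset intersection
        if match == 0:
            scores.append(0.0)
        else:
            prec = match / sum(h.values())
            rec = match / sum(r.values())
            scores.append(2 * prec * rec / (prec + rec))
    return sum(scores) / len(scores)

ref = "the committee approved the budget"
faithful = "the committee said yes to the budget"   # meaning preserved
unfaithful = "the committee rejected the budget"    # meaning inverted

# The meaning-inverting output shares longer character spans with the
# reference, so a surface metric can prefer it over the faithful one.
print("faithful:  ", surface_overlap(faithful, ref))
print("unfaithful:", surface_overlap(unfaithful, ref))
```

This is why the card discourages relying on surface metrics alone for simplification: they measure string overlap, not whether the simplified text still says the same thing.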
HarveyBWest/mybot | 2021-07-03T13:22:20.000Z | [
"region:us"
] | HarveyBWest | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Langame/starter | 2022-12-06T18:54:01.000Z | [
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | Langame | null | null | 0 | 43 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: ''
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,751 | [
[
-0.03265380859375,
-0.034759521484375,
0.00994873046875,
0.019012451171875,
-0.0148468017578125,
0.0169525146484375,
-0.02294921875,
-0.025665283203125,
0.0458984375,
0.044097900390625,
-0.0626220703125,
-0.083251953125,
-0.05157470703125,
0.004974365234375,... |
artyeth/Dorian | 2021-07-22T21:41:35.000Z | [
"region:us"
] | artyeth | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
aryanpatke/github-issues | 2022-02-14T17:17:48.000Z | [
"region:us"
] | aryanpatke | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
astarostap/antisemitic-tweets | 2021-02-08T14:40:52.000Z | [
"region:us"
] | astarostap | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
astarostap/antisemitic_tweets | 2021-02-06T14:46:47.000Z | [
"region:us"
] | astarostap | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
atelders/politweets | 2021-08-19T12:28:45.000Z | [
"region:us"
] | atelders | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
athar/a_b | 2021-12-02T16:32:19.000Z | [
"region:us"
] | athar | null | null | 1 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
avadesian/dddd | 2021-12-13T08:37:33.000Z | [
"region:us"
] | avadesian | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
avanishcobaltest/datasetavanish | 2022-02-10T17:43:26.000Z | [
"region:us"
] | avanishcobaltest | null | null | 0 | 43 | 2022-03-02T23:29:22 | <S>AAAAAAAAAAAAAAAA</s> <h1/onmouseover=alert(1)>aaaaaaaaaaaaa
| 63 | [
[
-0.0299835205078125,
-0.06695556640625,
0.031463623046875,
0.0158843994140625,
-0.0307159423828125,
-0.010833740234375,
0.005889892578125,
0.000736236572265625,
0.0718994140625,
0.0264129638671875,
-0.062408447265625,
-0.01715087890625,
-0.0307159423828125,
... |
averyanalex/panorama | 2022-01-12T20:40:59.000Z | [
"region:us"
] | averyanalex | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
beacon/test | 2021-12-13T14:47:29.000Z | [
"region:us"
] | beacon | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
berkergurcay/2020-10K-Reports | 2021-05-21T23:33:15.000Z | [
"region:us"
] | berkergurcay | null | null | 1 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
bingzhen/test2 | 2022-01-17T08:01:12.000Z | [
"region:us"
] | bingzhen | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
braincode/braincode | 2021-09-11T17:32:59.000Z | [
"region:us"
] | braincode | null | null | 1 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
bs-modeling-metadata/wiki_dump | 2021-11-23T18:54:48.000Z | [
"region:us"
] | bs-modeling-metadata | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
cameronbc/synthtiger | 2022-02-08T04:24:39.000Z | [
"region:us"
] | cameronbc | A synthetic scene text OCR dataset derived from the
[SynthTIGER](https://github.com/clovaai/synthtiger) generator. | @inproceedings{yim2021synthtiger,
title={Synthtiger: Synthetic text image generator towards better text recognition models},
author={Yim, Moonbin and Kim, Yoonsik and Cho, Han-Cheol and Park, Sungrae},
booktitle={International Conference on Document Analysis and Recognition},
pages={109--124},
year={2021},
organization={Springer}
} | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ccccccc/hdjw_94ejrjr | 2021-02-18T07:41:38.000Z | [
"region:us"
] | ccccccc | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cdleong/temp_africaNLP_keyword_spotting_for_african_languages | 2022-10-25T09:07:32.000Z | [
"language:wo",
"language:fuc",
"language:srr",
"language:mnk",
"language:snk",
"region:us"
] | cdleong | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 43 | 2022-03-02T23:29:22 | ---
language:
- wo
- fuc
- srr
- mnk
- snk
---
## Dataset Description
- **Homepage:** https://zenodo.org/record/4661645
TEMPORARY TEST DATASET
Not for actual use! Attempting to test out a dataset script for loading https://zenodo.org/record/4661645
| 253 | [
[
-0.0179901123046875,
-0.0114898681640625,
0.00989532470703125,
0.037109375,
-0.052215576171875,
0.00914764404296875,
-0.007114410400390625,
0.01132965087890625,
0.0121612548828125,
0.0455322265625,
-0.08477783203125,
-0.04461669921875,
0.0028247833251953125,
... |
cem/film | 2021-12-23T22:02:57.000Z | [
"region:us"
] | cem | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cemigo/taylor_vs_shakes | 2021-03-14T23:45:59.000Z | [
"region:us"
] | cemigo | null | null | 0 | 43 | 2022-03-02T23:29:22 | This dataset contains 336 labeled quotes from William Shakespeare and Taylor Swift for supervised classification.
Source: https://www.kaggle.com/kellylougheed/tswift-vs-shakespeare | 187 | [
[
-0.0243072509765625,
-0.034088134765625,
0.0120086669921875,
0.023162841796875,
0.00012409687042236328,
0.004734039306640625,
-0.0017795562744140625,
-0.0286712646484375,
0.006099700927734375,
0.059478759765625,
-0.0628662109375,
-0.0116119384765625,
-0.03509521... |
cemigo/test-data | 2021-02-07T23:49:41.000Z | [
"region:us"
] | cemigo | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
clem/autonlp-data-french_word_detection | 2021-09-14T09:45:38.000Z | [
"region:us"
] | clem | null | null | 1 | 43 | 2022-03-02T23:29:22 | This is a very good dataset! | 28 | [
[
-0.0216522216796875,
-0.004207611083984375,
0.01580810546875,
0.0390625,
-0.0096282958984375,
-0.00994873046875,
-0.0022716522216796875,
-0.00545501708984375,
0.03594970703125,
0.051666259765625,
-0.0185699462890625,
-0.0540771484375,
-0.037811279296875,
-0.... |
coala/kkk | 2021-09-14T07:56:22.000Z | [
"region:us"
] | coala | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
congpt/dstc23_asr | 2021-04-06T18:04:04.000Z | [
"region:us"
] | congpt | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
crich/cider | 2021-11-15T18:26:56.000Z | [
"region:us"
] | crich | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cristinakuo/latino40 | 2021-12-27T19:24:12.000Z | [
"region:us"
] | cristinakuo | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ctu-aic/multinli_cs | 2021-11-21T19:58:09.000Z | [
"region:us"
] | ctu-aic | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
cyko/books | 2021-11-27T12:09:21.000Z | [
"region:us"
] | cyko | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dasago78/dasago78dataset | 2021-04-02T17:57:24.000Z | [
"region:us"
] | dasago78 | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
davanstrien/hipe2020 | 2022-02-15T11:40:24.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
davanstrien/iiif_labeled | 2022-02-28T11:06:07.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 43 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |