id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
notrichardren/azaria-mitchell-diff-filtered-2 | 2023-10-03T05:22:58.000Z | [
"region:us"
] | notrichardren | null | null | 0 | 48 | 2023-10-03T05:22:52 | ---
configs:
- config_name: default
data_files:
- split: cities
path: data/cities-*
- split: companies
path: data/companies-*
- split: animals
path: data/animals-*
- split: elements
path: data/elements-*
- split: inventions
path: data/inventions-*
- split: facts
path: data/facts-*
dataset_info:
features:
- name: claim
dtype: string
- name: label
dtype: int64
- name: dataset
dtype: string
- name: qa_type
dtype: int64
- name: ind
dtype: int64
splits:
- name: cities
num_bytes: 311504
num_examples: 4416
- name: companies
num_bytes: 86125
num_examples: 777
- name: animals
num_bytes: 60222
num_examples: 692
- name: elements
num_bytes: 52499
num_examples: 636
- name: inventions
num_bytes: 49480
num_examples: 594
- name: facts
num_bytes: 43529
num_examples: 472
download_size: 209164
dataset_size: 603359
---
# Dataset Card for "azaria-mitchell-diff-filtered-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,132 | [
[
-0.057586669921875,
-0.028717041015625,
0.022735595703125,
0.0189666748046875,
-0.032257080078125,
-0.01224517822265625,
0.031402587890625,
-0.01556396484375,
0.0399169921875,
0.042755126953125,
-0.07342529296875,
-0.051910400390625,
-0.069580078125,
-0.0011... |
Trelis/openassistant-llama-style | 2023-10-31T11:29:33.000Z | [
"size_categories:1K<n<10k",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",... | Trelis | null | null | 2 | 48 | 2023-10-04T14:14:13 | ---
license: apache-2.0
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
tags:
- human-feedback
- llama-2
size_categories:
- 1K<n<10k
pretty_name: Filtered OpenAssistant Conversations
---
# Chat Fine-tuning Dataset - Llama 2 Style
This dataset allows for fine-tuning chat models using [INST] and [/INST] to wrap user messages.
Preparation:
1. The dataset is cloned from [TimDettmers](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), which itself is a subset of the Open Assistant dataset, which you can find [here](https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main). This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
1. The dataset was then processed to:
- replace instances of '### Human:' with '[INST]'
- replace instances of '### Assistant:' with '</s><s> [/INST]' (to encourage the model to emit '</s>' when it has finished a response)
- append '[INST]' to the end of any row of data that ends with an assistant response.
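As a rough sketch, these rules can be written as a single string transformation. This is an illustration only, not the exact preprocessing script used to build the dataset:

```python
def to_llama_format(row: str) -> str:
    """Rewrite a '### Human:'/'### Assistant:' row into Llama 2 style tags.

    Illustrative sketch; the actual script used for this dataset is not
    published here.
    """
    out = row.replace("### Human:", "[INST]")
    out = out.replace("### Assistant:", "</s><s> [/INST]")
    # If the last turn marker is '[/INST]', the row ends with an assistant
    # response, so append '[INST]' as described above.
    if out.rfind("[/INST]") > out.rfind("[INST]"):
        out += " [INST]"
    return out
```

For example, `to_llama_format("### Human: Hi### Assistant: Hello")` yields a row wrapped in the Llama 2 style tags with a trailing `[INST]`, since it ends with an assistant response.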
Details of the root dataset follow, copied from that repo:
# OpenAssistant Conversations Dataset (OASST1)
## Dataset Description
- **Homepage:** https://www.open-assistant.io/
- **Repository:** https://github.com/LAION-AI/Open-Assistant
- **Paper:** https://arxiv.org/abs/2304.07327
### Dataset Summary
In an effort to democratize research on large-scale alignment, we release OpenAssistant
Conversations (OASST1), a human-generated, human-annotated assistant-style conversation
corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292
quality ratings, resulting in over 10,000 fully annotated conversation trees. The corpus
is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Please refer to our [paper](https://arxiv.org/abs/2304.07327) for further details.
### Dataset Structure
This dataset contains message trees. Each message tree has an initial prompt message as the root node,
which can have multiple child messages as replies, and these child messages can have multiple replies.
All messages have a role property: this can either be "assistant" or "prompter". The roles in
conversation threads from prompt to leaf node strictly alternate between "prompter" and "assistant".
This version of the dataset contains data collected on the [open-assistant.io](https://open-assistant.io/) website until April 12, 2023.
### JSON Example: Message
For readability, the following JSON examples are shown formatted with indentation on multiple lines.
Objects are stored without indentation (on single lines) in the actual jsonl files.
```json
{
"message_id": "218440fd-5317-4355-91dc-d001416df62b",
"parent_id": "13592dfb-a6f9-4748-a92c-32b34e239bb4",
"user_id": "8e95461f-5e94-4d8b-a2fb-d4717ce973e4",
"text": "It was the winter of 2035, and artificial intelligence (..)",
"role": "assistant",
"lang": "en",
"review_count": 3,
"review_result": true,
"deleted": false,
"rank": 0,
"synthetic": true,
"model_name": "oasst-sft-0_3000,max_new_tokens=400 (..)",
"labels": {
"spam": { "value": 0.0, "count": 3 },
"lang_mismatch": { "value": 0.0, "count": 3 },
"pii": { "value": 0.0, "count": 3 },
"not_appropriate": { "value": 0.0, "count": 3 },
"hate_speech": { "value": 0.0, "count": 3 },
"sexual_content": { "value": 0.0, "count": 3 },
"quality": { "value": 0.416, "count": 3 },
"toxicity": { "value": 0.16, "count": 3 },
"humor": { "value": 0.0, "count": 3 },
"creativity": { "value": 0.33, "count": 3 },
"violence": { "value": 0.16, "count": 3 }
}
}
```
### JSON Example: Conversation Tree
For readability, only a subset of the message properties is shown here.
```json
{
"message_tree_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"tree_state": "ready_for_export",
"prompt": {
"message_id": "14fbb664-a620-45ce-bee4-7c519b16a793",
"text": "Why can't we divide by 0? (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "894d30b6-56b4-4605-a504-89dd15d4d1c8",
"text": "The reason we cannot divide by zero is because (..)",
"role": "assistant",
"lang": "en",
"replies": [
// ...
]
},
{
"message_id": "84d0913b-0fd9-4508-8ef5-205626a7039d",
"text": "The reason that the result of a division by zero is (..)",
"role": "assistant",
"lang": "en",
"replies": [
{
"message_id": "3352725e-f424-4e3b-a627-b6db831bdbaa",
"text": "Math is confusing. Like those weird Irrational (..)",
"role": "prompter",
"lang": "en",
"replies": [
{
"message_id": "f46207ca-3149-46e9-a466-9163d4ce499c",
"text": "Irrational numbers are simply numbers (..)",
"role": "assistant",
"lang": "en",
"replies": []
},
// ...
]
}
]
}
]
}
}
```
Please refer to [oasst-data](https://github.com/LAION-AI/Open-Assistant/tree/main/oasst-data) for
details about the data structure and Python code to read and write jsonl files containing oasst data objects.
If you would like to explore the dataset yourself you can find a
[`getting-started`](https://github.com/LAION-AI/Open-Assistant/blob/main/notebooks/openassistant-oasst1/getting-started.ipynb)
notebook in the `notebooks/openassistant-oasst1` folder of the [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
github repository.
## Main Dataset Files
Conversation data is provided either as nested messages in trees (extension `.trees.jsonl.gz`)
or as a flat list (table) of messages (extension `.messages.jsonl.gz`).
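Both file types are gzip-compressed JSON Lines, with one object per line, so a small helper is enough to stream either of them (a sketch; the official reader lives in the `oasst-data` package linked above):

```python
import gzip
import json


def read_oasst_jsonl(path):
    """Stream oasst objects from a .jsonl.gz export.

    Works for both *.messages.jsonl.gz and *.trees.jsonl.gz files,
    since each line holds exactly one JSON object.
    """
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

Iterating lazily like this avoids loading the full 161k-message export into memory at once.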
### Ready For Export Trees
```
2023-04-12_oasst_ready.trees.jsonl.gz 10,364 trees with 88,838 total messages
2023-04-12_oasst_ready.messages.jsonl.gz 88,838 messages
```
Trees in the `ready_for_export` state, with spam and deleted messages removed and message labels included.
The `oasst_ready` trees file is usually sufficient for supervised fine-tuning (SFT) and reward model (RM) training.
### All Trees
```
2023-04-12_oasst_all.trees.jsonl.gz 66,497 trees with 161,443 total messages
2023-04-12_oasst_all.messages.jsonl.gz 161,443 messages
```
All trees, including those in states `prompt_lottery_waiting` (trees that consist of only one message, namely the initial prompt),
`aborted_low_grade` (trees that stopped growing because the messages had low quality), and `halted_by_moderator`.
### Supplemental Exports: Spam & Prompts
```
2023-04-12_oasst_spam.messages.jsonl.gz
```
These are messages which were deleted or have a negative review result (`"review_result": false`).
Besides low quality, a frequent reason for message deletion is a wrong language tag.
```
2023-04-12_oasst_prompts.messages.jsonl.gz
```
These are all the kept initial prompt messages with positive review result (no spam) of trees in `ready_for_export` or `prompt_lottery_waiting` state.
### Using the Huggingface Datasets
While HF datasets is ideal for tabular datasets, it is not a natural fit for nested data structures like the OpenAssistant conversation trees.
Nevertheless, we make all messages which can also be found in the file `2023-04-12_oasst_ready.trees.jsonl.gz` available in parquet as train/validation splits.
These are directly loadable by [Huggingface Datasets](https://pypi.org/project/datasets/).
To load the oasst1 train & validation splits use:
```python
from datasets import load_dataset
ds = load_dataset("OpenAssistant/oasst1")
train = ds['train'] # len(train)=84437 (95%)
val = ds['validation'] # len(val)=4401 (5%)
```
The messages appear in depth-first order of the message trees.
Full conversation trees can be reconstructed from the flat messages table by using the `parent_id`
and `message_id` properties to identify the parent-child relationship of messages. The `message_tree_id`
and `tree_state` properties (only present in flat messages files) can be used to find all messages of a message tree or to select trees by their state.
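A minimal sketch of this reconstruction from the flat table, assuming each message dict carries `message_id` and `parent_id` (with `parent_id` of `None` for root prompts):

```python
from collections import defaultdict


def build_trees(messages):
    """Reconstruct conversation trees from flat oasst message dicts.

    Assumes each dict has 'message_id' and 'parent_id' (None for root
    prompts), matching the flat messages table described above.
    """
    by_parent = defaultdict(list)
    nodes = {}
    for m in messages:
        node = dict(m, replies=[])  # copy, adding an empty replies list
        nodes[m["message_id"]] = node
        by_parent[m.get("parent_id")].append(node)
    # Attach each node's children via the parent -> children index.
    for node in nodes.values():
        node["replies"] = by_parent[node["message_id"]]
    return by_parent[None]  # root prompt messages
```

Selecting all messages of one tree, or filtering trees by state, can instead be done directly on the flat table via `message_tree_id` and `tree_state`.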
### Languages
OpenAssistant Conversations incorporates 35 different languages with a distribution of messages as follows:
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Discord [Open Assistant Discord Server](https://ykilcher.com/open-assistant-discord)
- GitHub: [LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)
- E-Mail: [open-assistant@laion.ai](mailto:open-assistant@laion.ai) | 9,741 | [
[
-0.0212554931640625,
-0.068115234375,
0.01407623291015625,
0.01222991943359375,
-0.007137298583984375,
0.005031585693359375,
-0.00865936279296875,
-0.023406982421875,
0.0224609375,
0.029632568359375,
-0.048492431640625,
-0.059417724609375,
-0.04010009765625,
... |
ohtaman/aozora | 2023-10-21T05:13:27.000Z | [
"region:us"
] | ohtaman | null | null | 0 | 48 | 2023-10-08T06:25:53 | ---
dataset_info:
features:
- name: title
dtype: string
- name: author
dtype: string
- name: content
dtype: string
- name: filename
dtype: string
- name: category
dtype: string
- name: short_description
dtype: string
- name: char_kana_type
dtype: string
splits:
- name: train
num_bytes: 704528623.1545657
num_examples: 17006
- name: test
num_bytes: 4142823.8454343504
num_examples: 100
download_size: 393522386
dataset_size: 708671447.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "aozora"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 795 | [
[
-0.0421142578125,
-0.0257415771484375,
0.012725830078125,
0.006893157958984375,
-0.016815185546875,
-0.0101776123046875,
0.024658203125,
-0.022247314453125,
0.058837890625,
0.0499267578125,
-0.05029296875,
-0.05877685546875,
-0.04376220703125,
-0.02032470703... |
smangrul/hf-stack-v3 | 2023-10-12T11:13:59.000Z | [
"region:us"
] | smangrul | null | null | 0 | 48 | 2023-10-12T11:13:54 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 84459082
num_examples: 5139
download_size: 27283429
dataset_size: 84459082
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "hf-stack-v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 564 | [
[
-0.045440673828125,
-0.017791748046875,
0.01227569580078125,
0.0277252197265625,
-0.0015964508056640625,
-0.0010213851928710938,
0.049530029296875,
-0.026275634765625,
0.052764892578125,
0.052703857421875,
-0.055908203125,
-0.05377197265625,
-0.03021240234375,
... |
hippocrates/PubmedSumm_train | 2023-10-17T20:02:01.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 48 | 2023-10-13T10:53:29 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 109611215
num_examples: 3000
- name: valid
num_bytes: 113408585
num_examples: 3000
- name: test
num_bytes: 112787485
num_examples: 3000
download_size: 151056547
dataset_size: 335807285
---
# Dataset Card for "PubmedSumm_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 631 | [
[
-0.0255126953125,
0.007534027099609375,
0.026275634765625,
0.01100921630859375,
-0.0175323486328125,
-0.008544921875,
0.01291656494140625,
0.0029659271240234375,
0.053924560546875,
0.036285400390625,
-0.05413818359375,
-0.047882080078125,
-0.050811767578125,
... |
Henil1/Sans-eng | 2023-10-14T19:28:56.000Z | [
"region:us"
] | Henil1 | null | null | 0 | 48 | 2023-10-14T17:21:48 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Lajavaness/STS15-fr | 2023-10-19T23:14:05.000Z | [
"region:us"
] | Lajavaness | null | null | 1 | 48 | 2023-10-19T23:13:40 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Lajavaness/STS16-fr | 2023-10-20T09:15:30.000Z | [
"region:us"
] | Lajavaness | null | null | 1 | 48 | 2023-10-20T09:15:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cyrilzhang/TinyStories2-ascii-bpe-1k | 2023-10-24T01:25:32.000Z | [
"region:us"
] | cyrilzhang | null | null | 0 | 48 | 2023-10-24T01:22:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2711297200
num_examples: 661292
- name: validation
num_bytes: 27301900
num_examples: 6659
download_size: 855612624
dataset_size: 2738599100
---
# Dataset Card for "TinyStories2-ascii-bpe-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 588 | [
[
-0.0355224609375,
-0.008636474609375,
0.01235198974609375,
0.0241546630859375,
-0.03399658203125,
-0.0175628662109375,
-0.0007033348083496094,
-0.0136871337890625,
0.047454833984375,
0.029052734375,
-0.07025146484375,
-0.048309326171875,
-0.04541015625,
0.00... |
Alamerton/pangolin-llama-finetuning | 2023-10-24T10:29:42.000Z | [
"region:us"
] | Alamerton | null | null | 0 | 48 | 2023-10-24T10:27:54 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
khederwaaOne/my_dataset | 2023-10-24T18:33:31.000Z | [
"region:us"
] | khederwaaOne | null | null | 0 | 48 | 2023-10-24T17:59:00 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
Adminhuggingface/LORA_DATASET | 2023-10-26T12:21:55.000Z | [
"region:us"
] | Adminhuggingface | null | null | 0 | 48 | 2023-10-26T12:21:55 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 6575532.0
num_examples: 26
download_size: 6574426
dataset_size: 6575532.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "LORA_DATASET"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 475 | [
[
-0.041473388671875,
-0.034637451171875,
0.0090484619140625,
0.01233673095703125,
-0.0198211669921875,
-0.00830078125,
0.0308837890625,
-0.0193634033203125,
0.0791015625,
0.04791259765625,
-0.05364990234375,
-0.05645751953125,
-0.0411376953125,
-0.02587890625... |
jin05102518/Raw_v2 | 2023-10-30T14:41:38.000Z | [
"region:us"
] | jin05102518 | null | null | 0 | 48 | 2023-10-30T13:35:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
detectors/lsun_c-ood | 2023-10-30T18:25:37.000Z | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:unknown",
"arxiv:1610.01119",
"arxiv:1706.02690",
"region:us"
] | detectors | null | null | 0 | 48 | 2023-10-30T17:35:57 | ---
license: unknown
size_categories: 10K<n<100K
task_categories:
- image-classification
paperswithcode_id: lsun
pretty_name: LSUN (c)
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 17509356.0
num_examples: 10000
download_size: 0
dataset_size: 17509356.0
---
# Dataset Card for LSUN (c) for OOD Detection
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Original Dataset Authors**: Limin Wang, Sheng Guo, Weilin Huang, Yuanjun Xiong, Yu Qiao
- **OOD Split Authors:** Shiyu Liang, Yixuan Li, R. Srikant
- **Shared by:** Eduardo Dadalto
- **License:** unknown
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Original Dataset Paper:** http://arxiv.org/abs/1610.01119v2
- **First OOD Application Paper:** http://arxiv.org/abs/1706.02690v5
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset is intended to be used as an out-of-distribution dataset for image classification benchmarks.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset is not annotated.
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The goal of curating and sharing this dataset on the Hugging Face Hub is to accelerate research and promote reproducibility in generalized Out-of-Distribution (OOD) detection.
Check the python library [detectors](https://github.com/edadaltocg/detectors) if you are interested in OOD detection.
### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
Please check original paper for details on the dataset.
### Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Please check original paper for details on the dataset.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@software{detectors2023,
author = {Eduardo Dadalto},
title = {Detectors: a Python Library for Generalized Out-Of-Distribution Detection},
url = {https://github.com/edadaltocg/detectors},
doi = {https://doi.org/10.5281/zenodo.7883596},
month = {5},
year = {2023}
}
@article{1706.02690v5,
author = {Shiyu Liang and Yixuan Li and R. Srikant},
title = {Enhancing The Reliability of Out-of-distribution Image Detection in
Neural Networks},
year = {2017},
month = {6},
note = {ICLR 2018},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1706.02690v5}
}
@article{1610.01119v2,
author = {Limin Wang and Sheng Guo and Weilin Huang and Yuanjun Xiong and Yu Qiao},
title = {Knowledge Guided Disambiguation for Large-Scale Scene Classification
with Multi-Resolution CNNs},
year = {2016},
month = {10},
note = {To appear in IEEE Transactions on Image Processing. Code and models
are available at https://github.com/wanglimin/MRCNN-Scene-Recognition},
archiveprefix = {arXiv},
url = {http://arxiv.org/abs/1610.01119v2}
}
```
## Dataset Card Authors
Eduardo Dadalto
## Dataset Card Contact
https://huggingface.co/edadaltocg | 3,814 | [
[
-0.0345458984375,
-0.033172607421875,
0.03143310546875,
-0.0037441253662109375,
-0.02783203125,
-0.035491943359375,
0.0014982223510742188,
-0.043212890625,
-0.00290679931640625,
0.0242462158203125,
-0.0160064697265625,
-0.048126220703125,
-0.03472900390625,
... |
albertvillanova/tmp-tests | 2021-12-02T14:12:12.000Z | [
"region:us"
] | albertvillanova | null | null | 0 | 47 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
alireza655/alireza655 | 2021-02-08T23:24:50.000Z | [
"region:us"
] | alireza655 | null | null | 0 | 47 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
arpelarpe/nota | 2022-10-11T07:56:49.000Z | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:da",
"license:cc0-1.0",
"region:us"
] | arpelarpe | Nota audio and text data
The dataset contains both text and speech data from selected parts of Nota's audiobook library. It consists of
over 500 hours of read-aloud recordings and accompanying transcriptions in Danish. All audio data is in .wav format, while text data
is in .txt format.
The data comprises readings of Nota's own magazines "Inspiration" and "Radio/TV", published between 2007 and 2022.
Nota is credited with the work of structuring the data so that text and audio are aligned.
Nota is an institution under the Danish Ministry of Culture that makes printed texts available in digital formats to people
with visual impairments and reading difficulties, e.g. through the production of audiobooks and readings of newspapers, magazines, etc. | null | 2 | 47 | 2022-10-11T06:37:42 | ---
pretty_name: Nota
license:
- cc0-1.0
language:
- da
multilinguality:
- monolingual
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Nota Audio and Text Data
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Disclaimer](#disclaimer)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://sprogteknologi.dk/dataset/notalyd-ogtekstdata
- **Data Storage Url:** https://sprogtek-ressources.digst.govcloud.dk/nota/
- **Point of Contact:** info@sprogteknologi.dk
### Dataset Summary
This data was created by the public institution Nota (https://nota.dk/), which is part of the Danish Ministry of Culture. Nota maintains a library of audiobooks and audiomagazines for people with reading or sight disabilities, and also produces a number of audiobooks and audiomagazines itself.
The dataset consists of .wav and .txt files from Nota's audiomagazines "Inspiration" and "Radio/TV".
The dataset has been published as a part of the initiative sprogteknologi.dk, within the Danish Agency for Digital Government (www.digst.dk).
The dataset comprises 336 GB of data, containing voice recordings and accompanying transcripts.
Each publication has been segmented into .wav files of 2 to 50 seconds, each with an accompanying transcription.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Danish
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called path and its sentence.
```
{'path': '<path_to_clip>.wav',
 'sentence': 'Dette er et eksempel',
 'audio': {'path': '<path_to_clip>.wav',
           'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
                           0.00091553, 0.00085449], dtype=float32),
           'sampling_rate': 44100}}
```
### Data Fields
path: The path to the audio file
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence that was read by the speaker
### Data Splits
For now, the material has only a train split. As the dataset is at a very early stage, additional splits might be introduced later.
## Dataset Creation
### Disclaimer
There might be minor discrepancies between the .wav and .txt files; as a result, timestamps, text, and sound files may not always be perfectly aligned.
There are no strict rules as to how readers read aloud non-letter characters (e.g. numbers, €, $, !, ?). These symbols can be read differently throughout the dataset.
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset is made public and free to use. Recorded individuals have, by written contract, accepted and agreed to the publication of their recordings.
Other names appearing in the dataset belong to already publicly known individuals (e.g. TV or radio hosts). Their names are not to be treated as sensitive or personal data in the context of this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
https://sprogteknologi.dk/
Contact info@sprogteknologi.dk if you have questions regarding use of data.
They gladly receive inputs and ideas on how to distribute the data.
### Licensing Information
[CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/)
| 5,011 | [
[
-0.03619384765625,
-0.0399169921875,
0.01488494873046875,
0.0229949951171875,
-0.017822265625,
-0.0021152496337890625,
-0.032196044921875,
-0.035369873046875,
0.037994384765625,
0.04583740234375,
-0.06591796875,
-0.07647705078125,
-0.04718017578125,
0.020477... |
GEM/TaTA | 2022-11-03T14:23:59.000Z | [
"task_categories:table-to-text",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:yes",
"size_categories:unknown",
"source_datasets:original",
"language:ar",
"language:en",
"language:fr",
"language:ha",
"language:ig",
"language:pt",
"language:ru",
"language:sw",
... | GEM | Dataset loader for TaTA: A Multilingual Table-to-Text Dataset for African Languages | @misc{gehrmann2022TaTA,
Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera},
Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages},
Year = {2022},
Eprint = {arXiv:2211.00142},
} | 0 | 47 | 2022-11-02T13:21:53 | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- en
- fr
- ha
- ig
- pt
- ru
- sw
- yo
multilinguality:
- yes
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: TaTA
tags:
- data-to-text
license: cc-by-sa-4.0
dataset_info:
features:
- name: gem_id
dtype: string
- name: example_id
dtype: string
- name: title
dtype: string
- name: unit_of_measure
dtype: string
- name: chart_type
dtype: string
- name: was_translated
dtype: string
- name: table_data
dtype: string
- name: linearized_input
dtype: string
- name: table_text
sequence: string
- name: target
dtype: string
splits:
- name: ru
num_bytes: 308435
num_examples: 210
- name: test
num_bytes: 1691383
num_examples: 763
- name: train
num_bytes: 10019272
num_examples: 6962
- name: validation
num_bytes: 1598442
num_examples: 754
download_size: 18543506
dataset_size: 13617532
---
# Dataset Card for GEM/TaTA
## Dataset Description
- **Homepage:** https://github.com/google-research/url-nlp
- **Repository:** https://github.com/google-research/url-nlp
- **Paper:** https://arxiv.org/abs/2211.00142
- **Leaderboard:** https://github.com/google-research/url-nlp
- **Point of Contact:** Sebastian Ruder
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/TaTA).
### Dataset Summary
Existing data-to-text generation datasets are mostly limited to English. Table-to-Text in African languages (TaTA) addresses this lack of data as the first large multilingual table-to-text dataset with a focus on African languages. TaTA was created by transcribing figures and accompanying text in bilingual reports by the Demographic and Health Surveys Program, followed by professional translation to make the dataset fully parallel. TaTA includes 8,700 examples in nine languages including four African languages (Hausa, Igbo, Swahili, and Yorùbá) and a zero-shot test language (Russian).
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/TaTA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/TaTA).
#### website
[Github](https://github.com/google-research/url-nlp)
#### paper
[ArXiv](https://arxiv.org/abs/2211.00142)
#### authors
Sebastian Gehrmann, Sebastian Ruder , Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2211.00142)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@misc{gehrmann2022TaTA,
Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera},
Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages},
Year = {2022},
Eprint = {arXiv:2211.00142},
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Sebastian Ruder
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
ruder@google.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Github](https://github.com/google-research/url-nlp)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The paper introduces StATA, a metric trained on human ratings, which is used to rank approaches submitted to the leaderboard.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Portuguese`, `Arabic`, `French`, `Hausa`, `Swahili (macrolanguage)`, `Igbo`, `Yoruba`, `Russian`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The language is taken from reports by the demographic and health surveys program.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset poses significant reasoning challenges and is thus meant as a way to assess the verbalization and reasoning capabilities of structure-to-text models.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Summarize key information from a table in a single sentence.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google Research
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Sebastian Gehrmann, Sebastian Ruder , Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google Research
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Sebastian Gehrmann (Google Research)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `example_id`: The ID of the example. Each ID (e.g., `AB20-ar-1`) consists of three parts: the document ID, the language ISO 639-1 code, and the index of the table within the document.
- `title`: The title of the table.
- `unit_of_measure`: A description of the numerical value of the data. E.g., percentage of households with clean water.
- `chart_type`: The kind of chart associated with the data. We consider the following (normalized) types: horizontal bar chart, map chart, pie graph, tables, line chart, pie chart, vertical chart type, line graph, vertical bar chart, and other.
- `was_translated`: Whether the table was transcribed in the original language of the report or translated.
- `table_data`: The table content is a JSON-encoded string of a two-dimensional list, organized by row, from left to right, starting from the top of the table. Number of items varies per table. Empty cells are given as empty string values in the corresponding table cell.
- `table_text`: The sentences forming the description of each table are encoded as a JSON object. In the case of more than one sentence, these are separated by commas. Number of items varies per table.
- `linearized_input`: A single string that contains the table content separated by vertical bars (|), including the title, the unit of measure, and the content of each cell together with its row and column headers in parentheses, e.g., (Medium Empowerment, Mali, 17.9).
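As an illustrative sketch of how these fields fit together (not part of the official data loader), the JSON-encoded `table_data` string can be decoded back into its two-dimensional row list:

```python
import json

def decode_table(table_data):
    """Decode the JSON-encoded `table_data` string into a list of rows.

    Row 0 holds the column headers and column 0 of each later row holds
    the row header; empty cells come back as empty strings.
    """
    rows = json.loads(table_data)
    if not isinstance(rows, list) or not all(isinstance(r, list) for r in rows):
        raise ValueError("table_data is not a two-dimensional list")
    return rows

# A shortened cell layout in the style of this dataset's tables:
rows = decode_table('[["", "Child mortality"], ["1990 JPFHS", 5]]')
```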
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure includes all available information for the infographics on which the dataset is based.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
Annotators looked through English text to identify sentences that describe an infographic. They then identified the corresponding location of the parallel non-English document. All sentences were extracted.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"example_id": "FR346-en-39",
"title": "Trends in early childhood mortality rates",
"unit_of_measure": "Deaths per 1,000 live births for the 5-year period before the survey",
"chart_type": "Line chart",
"was_translated": "False",
"table_data": "[[\"\", \"Child mortality\", \"Neonatal mortality\", \"Infant mortality\", \"Under-5 mortality\"], [\"1990 JPFHS\", 5, 21, 34, 39], [\"1997 JPFHS\", 6, 19, 29, 34], [\"2002 JPFHS\", 5, 16, 22, 27], [\"2007 JPFHS\", 2, 14, 19, 21], [\"2009 JPFHS\", 5, 15, 23, 28], [\"2012 JPFHS\", 4, 14, 17, 21], [\"2017-18 JPFHS\", 3, 11, 17, 19]]",
"table_text": [
"neonatal, infant, child, and under-5 mortality rates for the 5 years preceding each of seven JPFHS surveys (1990 to 2017-18).",
"Under-5 mortality declined by half over the period, from 39 to 19 deaths per 1,000 live births.",
"The decline in mortality was much greater between the 1990 and 2007 surveys than in the most recent period.",
"Between 2012 and 2017-18, under-5 mortality decreased only modestly, from 21 to 19 deaths per 1,000 live births, and infant mortality remained stable at 17 deaths per 1,000 births."
],
"linearized_input": "Trends in early childhood mortality rates | Deaths per 1,000 live births for the 5-year period before the survey | (Child mortality, 1990 JPFHS, 5) (Neonatal mortality, 1990 JPFHS, 21) (Infant mortality, 1990 JPFHS, 34) (Under-5 mortality, 1990 JPFHS, 39) (Child mortality, 1997 JPFHS, 6) (Neonatal mortality, 1997 JPFHS, 19) (Infant mortality, 1997 JPFHS, 29) (Under-5 mortality, 1997 JPFHS, 34) (Child mortality, 2002 JPFHS, 5) (Neonatal mortality, 2002 JPFHS, 16) (Infant mortality, 2002 JPFHS, 22) (Under-5 mortality, 2002 JPFHS, 27) (Child mortality, 2007 JPFHS, 2) (Neonatal mortality, 2007 JPFHS, 14) (Infant mortality, 2007 JPFHS, 19) (Under-5 mortality, 2007 JPFHS, 21) (Child mortality, 2009 JPFHS, 5) (Neonatal mortality, 2009 JPFHS, 15) (Infant mortality, 2009 JPFHS, 23) (Under-5 mortality, 2009 JPFHS, 28) (Child mortality, 2012 JPFHS, 4) (Neonatal mortality, 2012 JPFHS, 14) (Infant mortality, 2012 JPFHS, 17) (Under-5 mortality, 2012 JPFHS, 21) (Child mortality, 2017-18 JPFHS, 3) (Neonatal mortality, 2017-18 JPFHS, 11) (Infant mortality, 2017-18 JPFHS, 17) (Under-5 mortality, 2017-18 JPFHS, 19)"
}
```
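The `linearized_input` in this instance pairs each cell value with its column and row headers. A hypothetical re-implementation of that linearization (mirroring, but not taken from, the official preprocessing) could look like:

```python
def linearize(title, unit, rows):
    """Rebuild a linearized_input-style string from decoded table rows.

    Assumes row 0 holds the column headers and column 0 of each later
    row holds the row header, as in the instance above.
    """
    cells = []
    for row in rows[1:]:
        row_header = row[0]
        for col_header, value in zip(rows[0][1:], row[1:]):
            cells.append(f"({col_header}, {row_header}, {value})")
    return f"{title} | {unit} | " + " ".join(cells)

rows = [["", "Child mortality", "Neonatal mortality"],
        ["1990 JPFHS", 5, 21]]
linearize("Trends in early childhood mortality rates",
          "Deaths per 1,000 live births", rows)
# -> "Trends in early childhood mortality rates | Deaths per 1,000 live births | (Child mortality, 1990 JPFHS, 5) (Neonatal mortality, 1990 JPFHS, 21)"
```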
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `Train`: Training set, includes examples with 0 or more references.
- `Validation`: Validation set, includes examples with 3 or more references.
- `Test`: Test set, includes examples with 3 or more references.
- `Ru`: Russian zero-shot set. Includes English and Russian examples (Russian is not included in any of the other splits).
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The same table across languages is always in the same split, i.e., if table X is in the test split in language A, it will also be in the test split in language B. In addition to filtering examples without transcribed table values, every example of the development and test splits has at least 3 references.
From the examples that fulfilled these criteria, 100 tables were sampled for both development and test for a total of 800 examples each. A manual review process excluded a few tables in each set, resulting in a training set of 6,962 tables, a development set of 752 tables, and a test set of 763 tables.
#### Outliers
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
There are tables without references, without values, and others that are very large. The dataset is distributed as-is, but the paper describes multiple strategies to deal with data issues.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
There is no other multilingual data-to-text dataset that is parallel over languages. Moreover, over 70% of references in the dataset require reasoning and it is thus of very high quality and challenging for models.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
More languages, parallel across languages, grounded in infographics, not centered on Western entities or source documents
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
reasoning, verbalization, content planning
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
The background section of the [paper](https://arxiv.org/abs/2211.00142) provides a list of related datasets.
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
- `data-to-text`: Term that refers to NLP tasks in which the input is structured information and the output is natural language.
## Previous Results
### Previous Results
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
`StATA`: A new metric associated with TaTA that is trained on human judgments and has a much higher correlation with them than existing metrics.
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The creators used a human evaluation that measured [attribution](https://arxiv.org/abs/2112.12870) and reasoning capabilities of various models. Based on these ratings, they trained a new metric and showed that existing metrics fail to measure attribution.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The curation rationale is to create a multilingual data-to-text dataset that is high-quality and challenging.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The communicative goal is to describe a table in a single sentence.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language was produced by USAID as part of the Demographic and Health Surveys program (https://dhsprogram.com/).
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The topics are related to fertility, family planning, maternal and child health, gender, and nutrition.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
11<n<50
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Professional annotator who is a fluent speaker of the respective language
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
0
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
yes
#### Which Annotation Service
<!-- info: Which annotation services were used? -->
<!-- scope: periscope -->
`other`
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The additional annotations are for system outputs and references and serve to develop metrics for this task.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Ratings were compared to a small (English) expert-curated set of ratings to ensure high agreement. There were additional rounds of training and feedback to annotators to ensure high quality judgments.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
In addition to data-to-text generation, the dataset can be used for translation or multimodal research.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The DHS program only publishes aggregate survey information and thus, no personal information is included.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
The dataset is focusing on data about African countries and the languages included in the dataset are spoken in Africa. It aims to improve the representation of African languages in the NLP and NLG communities.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The language producers for this dataset are those employed by the DHS program which is a US-funded program. While the data is focused on African countries, there may be implicit western biases in how the data is presented.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
While tables were transcribed in the available languages, the majority of the tables were published in English as the first language. Professional translators were used to translate the data, which makes it plausible that some translationese exists in the data. Moreover, it was unavoidable to collect reference sentences that are only partially entailed by the source tables.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The domain of health reports includes potentially sensitive topics relating to reproduction, violence, sickness, and death. Perceived negative values could be used to amplify stereotypes about people from the respective regions or countries. The intended academic use of this dataset is to develop and evaluate models that neutrally report the content of these tables but not use the outputs to make value judgments, and these applications are thus discouraged.
| 24,871 | [
[
-0.0280303955078125,
-0.060394287109375,
0.023468017578125,
0.0028553009033203125,
-0.0138397216796875,
-0.0032634735107421875,
-0.0297698974609375,
-0.026153564453125,
0.033447265625,
0.0310211181640625,
-0.0278167724609375,
-0.06689453125,
-0.04638671875,
... |
bigbio/linnaeus | 2022-12-22T15:44:50.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigbio | Linnaeus is a novel corpus of full-text documents manually annotated for species mentions. | @Article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={1--17},
year={2010},
publisher={BioMed Central}
} | 1 | 47 | 2022-11-13T22:09:07 |
---
language:
- en
bigbio_language:
- English
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: LINNAEUS
homepage: http://linnaeus.sourceforge.net/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for LINNAEUS
## Dataset Description
- **Homepage:** http://linnaeus.sourceforge.net/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
Linnaeus is a novel corpus of full-text documents manually annotated for species mentions.
## Citation Information
```
@Article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={1--17},
year={2010},
publisher={BioMed Central}
}
```
| 895 | [
[
-0.02630615234375,
-0.005977630615234375,
0.01751708984375,
0.00907135009765625,
-0.043182373046875,
-0.01473236083984375,
-0.002285003662109375,
-0.0310211181640625,
0.054412841796875,
0.02349853515625,
-0.0256195068359375,
-0.0631103515625,
-0.034637451171875,... |
grammarly/detexd-benchmark | 2023-07-10T17:36:37.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | grammarly | null | null | 1 | 47 | 2023-03-21T18:44:32 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
pretty_name: 'DeTexD: A Benchmark Dataset for Delicate Text Detection'
dataset_info:
features:
- name: text
dtype: string
- name: annotator_1
dtype: int32
- name: annotator_2
dtype: int32
- name: annotator_3
dtype: int32
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: test
num_examples: 1023
---
# Dataset Card for DeTexD: A Benchmark Dataset for Delicate Text Detection
## Dataset Description
- **Repository:** [DeTexD repository](https://github.com/grammarly/detexd)
- **Paper:** [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
### Dataset Summary
We define *delicate text* as any text that is emotionally charged or potentially triggering such that engaging with it has the potential to result in harm. This broad term covers a range of sensitive texts that vary across four major dimensions: 1) riskiness, 2) explicitness, 3) topic, and 4) target.
This dataset contains texts with fine-grained individual annotator labels from 0 to 5 (where 0 indicates no risk and 5 indicates high risk) and averaged binary labels. See paper for more details.
**Repository:** [DeTexD repository](https://github.com/grammarly/detexd) <br>
**Paper:** [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO)
## Dataset Structure
### Data Instances
```
{'text': '"He asked me and the club if we could give him a couple of days off just to clear up his mind and he will be back in the group, I suppose, next Monday, back for training and then be a regular part of the whole squad again," Rangnick said.',
'annotator_1': 0,
'annotator_2': 0,
'annotator_3': 0,
'label': 0}
```
### Data Fields
- `text`: Text to be classified
- `annotator_1`: Annotator 1 score (0-5)
- `annotator_2`: Annotator 2 score (0-5)
- `annotator_3`: Annotator 3 score (0-5)
- `label`: Binary label derived from the averaged annotator score (positive when the average is >= 3), either "negative" (0) or "positive" (1)
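The mapping from the three fine-grained annotator scores to the binary `label` can be sketched as follows; this assumes the "(>=3)" threshold applies to the mean of the three scores, and is an illustrative reading rather than the official aggregation script:

```python
def binary_label(scores):
    """Map per-annotator delicacy scores (0-5) to the binary label.

    Assumption: an example is "positive" (1) when the mean of the
    annotator scores is at least 3, else "negative" (0).
    """
    return int(sum(scores) / len(scores) >= 3)

binary_label([0, 0, 0])  # all-zero scores, as in the instance above, give label 0
```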
### Data Splits
| | test |
|--------------------|-----:|
| Number of examples | 1023 |
### Citation Information
```
@inproceedings{chernodub-etal-2023-detexd,
title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection",
author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem",
booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.woah-1.2",
pages = "14--28",
abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained each definitions and corpora and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.",
}
``` | 3,539 | [
[
-0.016845703125,
-0.06890869140625,
0.0267181396484375,
0.0012578964233398438,
-0.0297698974609375,
0.0022182464599609375,
-0.023223876953125,
-0.054931640625,
-0.01174163818359375,
0.01557159423828125,
-0.058074951171875,
-0.06341552734375,
-0.0521240234375,
... |
huanngzh/anime_face_control_60k | 2023-04-07T02:20:48.000Z | [
"region:us"
] | huanngzh | null | null | 1 | 47 | 2023-04-06T19:14:05 | ---
dataset_info:
features:
- name: item_id
dtype: string
- name: prompt
dtype: string
- name: blip_caption
dtype: string
- name: landmarks
sequence:
sequence: float64
- name: source
dtype: image
- name: target
dtype: image
- name: visual
dtype: image
- name: origin_path
dtype: string
- name: source_path
dtype: string
- name: target_path
dtype: string
- name: visual_path
dtype: string
splits:
- name: train
num_bytes: 5359477272.0
num_examples: 60000
download_size: 0
dataset_size: 5359477272.0
---
# Dataset Card for "acgn_face_control_60k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 767 | [
[
-0.036895751953125,
-0.0120849609375,
-0.01739501953125,
0.0234527587890625,
-0.0086669921875,
0.0033054351806640625,
0.02587890625,
-0.0185089111328125,
0.049835205078125,
0.035858154296875,
-0.061492919921875,
-0.056915283203125,
-0.042877197265625,
-0.023... |
chujiezheng/wizard_of_wikipedia | 2023-05-08T15:05:32.000Z | [
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2009.09378",
"region:us"
] | chujiezheng | null | null | 0 | 47 | 2023-05-08T13:35:40 | ---
license: cc-by-nc-4.0
language:
- en
---
Wizard-of-Wikipedia data for the Findings of EMNLP 2020 paper "Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation"
[GitHub repo](https://github.com/chujiezheng/DiffKS). [Original paper](https://arxiv.org/abs/2009.09378).
```bib
@inproceedings{zheng-etal-2020-diffks,
title="{D}ifference-aware Knowledge Selection for Knowledge-grounded Conversation Generation",
author="Zheng, Chujie and
Cao, Yunbo and
Jiang, Daxin and
Huang, Minlie",
booktitle="Findings of EMNLP",
year="2020"
}
```
| 594 | [
[
-0.047393798828125,
-0.057952880859375,
0.020416259765625,
0.017425537109375,
0.0038852691650390625,
-0.0136566162109375,
-0.00698089599609375,
-0.031585693359375,
0.050750732421875,
0.033721923828125,
-0.047210693359375,
-0.0408935546875,
-0.032135009765625,
... |
9wimu9/ada_derana_sinhala | 2023-07-13T17:12:52.000Z | [
"region:us"
] | 9wimu9 | null | null | 1 | 47 | 2023-07-13T15:12:05 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: heading
dtype: string
- name: content
sequence: string
splits:
- name: train
num_bytes: 418940569
num_examples: 170420
download_size: 159392910
dataset_size: 418940569
---
# Dataset Card for "ada_derana_sinhala"
This dataset includes Ada Derana Sinhala website news articles from January 6, 2010 to July 11, 2023. You can visit the original web page by using the "id" field.
[
-0.012603759765625,
-0.0477294921875,
0.0147552490234375,
0.00539398193359375,
-0.037353515625,
-0.01495361328125,
0.0157470703125,
-0.0364990234375,
0.041656494140625,
0.0206756591796875,
-0.046661376953125,
-0.052978515625,
-0.008697509765625,
0.0195770263... |
heliosbrahma/mental_health_conversational_dataset | 2023-07-22T11:30:56.000Z | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:mit",
"medical",
"region:us"
] | heliosbrahma | null | null | 3 | 47 | 2023-07-21T06:58:12 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 102904
num_examples: 154
download_size: 60865
dataset_size: 102904
license: mit
task_categories:
- text-generation
- conversational
language:
- en
tags:
- medical
pretty_name: Mental Health Conversational Dataset
size_categories:
- n<1K
---
# Dataset Card for "mental_health_conversational_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Dataset Description
### Dataset Summary
This dataset contains conversational pairs of questions and answers, each stored in a single text field, related to mental health. The dataset was curated from healthcare websites, popular blogs like WebMD and HealthLine, online FAQs, etc. All questions and answers have been anonymized to remove any PII data and pre-processed to remove any unwanted characters.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data instance includes a single text column containing a conversational pair of questions and answers. Questions were asked by patients and answers were given by healthcare providers.
### Data Fields
- 'text': conversational pair of questions and answers between patient and healthcare provider.
## Dataset Creation
### Curation Rationale
Chatbots offer a readily available and accessible platform for individuals seeking support. They can be accessed anytime and anywhere, providing immediate assistance to those in need. Chatbots can offer empathetic and non-judgmental responses, providing emotional support to users. While they cannot replace human interaction entirely, they can be a helpful supplement, especially in moments of distress.
Hence, this dataset was curated to help fine-tune a conversational AI bot, which can then be deployed and provided to end patients as a chatbot.
### Source Data
This dataset was curated from healthcare websites, popular blogs like WebMD and HealthLine, online FAQs, etc.
### Personal and Sensitive Information
The dataset may contain sensitive information related to mental health. All questions and answers have been anonymized to remove any PII data. | 2,575 | [
[
-0.02880859375,
-0.075439453125,
0.0243377685546875,
0.017364501953125,
-0.0071868896484375,
0.0078125,
-0.01239013671875,
-0.0150909423828125,
0.036376953125,
0.040283203125,
-0.0782470703125,
-0.054107666015625,
-0.03570556640625,
0.00012493133544921875,
... |
shirsh10mall/LLM_Instruct_Learning_Project_Preprocessed_Tokenized_Open_Orca_Dataset_Flan_T5 | 2023-08-08T11:52:51.000Z | [
"region:us"
] | shirsh10mall | null | null | 1 | 47 | 2023-07-25T13:53:22 | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: Inputs Token length
dtype: int64
- name: Response Token length
dtype: int64
splits:
- name: train
num_bytes: 1283943963.5926845
num_examples: 430318
- name: test
num_bytes: 226579926.12734038
num_examples: 75939
download_size: 588711752
dataset_size: 1510523889.7200248
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "temp_data_LLM_Project"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 886 | [
[
-0.0288848876953125,
-0.010467529296875,
0.0187225341796875,
0.00786590576171875,
-0.0230865478515625,
0.01311492919921875,
0.0159759521484375,
0.012115478515625,
0.06146240234375,
0.03375244140625,
-0.056182861328125,
-0.0682373046875,
-0.036285400390625,
-... |
gaodrew/pubmedQA-Llama-2-format | 2023-08-03T09:06:44.000Z | [
"region:us"
] | gaodrew | null | null | 2 | 47 | 2023-08-03T08:01:21 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
nlplabtdtu/people_qa | 2023-08-10T15:00:51.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 47 | 2023-08-10T14:58:40 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
google/trueteacher | 2023-09-06T19:39:38.000Z | [
"language:en",
"license:cc-by-nc-4.0",
"natural-language-inference",
"news-articles-summarization",
"arxiv:2305.11171",
"arxiv:2210.11416",
"region:us"
] | google | null | null | 8 | 47 | 2023-08-15T11:56:54 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- natural-language-inference
- news-articles-summarization
---
# **TrueTeacher**
## Dataset Summary
This is a large-scale synthetic dataset for training **Factual Consistency Evaluation** models, introduced in the [TrueTeacher paper (Gekhman et al, 2023)](https://arxiv.org/pdf/2305.11171.pdf).
## Dataset Details
The dataset contains model-generated summaries of articles from the train split of the **CNN/DailyMail** dataset [(Hermann et al., 2015)](https://proceedings.neurips.cc/paper_files/paper/2015/file/afdec7005cc9f14302cd0474fd0f3c96-Paper.pdf)
which are annotated for factual consistency using **FLAN-PaLM 540B** [(Chung et al.,2022)](https://arxiv.org/pdf/2210.11416.pdf).
Summaries were generated using summarization models with different capacities, which were created by fine-tuning **T5** [(Raffel et al., 2020)](https://jmlr.org/papers/volume21/20-074/20-074.pdf) on the **XSum** dataset [(Narayan et al., 2018)](https://aclanthology.org/D18-1206.pdf).
We used the following 5 capacities: T5-11B, T5-3B, T5-large, T5-base and T5-small.
## Data format
The data contains json lines with the following keys:
- `"summarization_model"` - The summarization model used to generate the summary.
- `"cnndm_id"` - The original id from the CNN/DailyMail dataset, this need to be used in order to retrieve the corresponding article from CNN/DailyMail (which was used as the grounding document).
- `"summary"` - The model-generated summary.
- `"label"` - A binary label ('1' - Factualy Consistent, '0' - Factualy Inconsistent).
Here is an example of a single data item:
```json
{
"summarization_model": "T5-11B",
"cnndm_id": "f72048a23154de8699c307e2f41157abbfcae261",
"summary": "Children's brains are being damaged by prolonged internet access, a former children's television presenter has warned."
"label": "1",
}
```
## Loading the dataset
To use the dataset, you need to fetch the relevant documents from the CNN/DailyMail dataset. The following code can be used for that purpose:
```python
from datasets import load_dataset
from tqdm import tqdm
trueteacher_data = load_dataset("google/trueteacher", split='train')
cnn_dailymail_data = load_dataset("cnn_dailymail", "3.0.0", split='train')
cnn_dailymail_articles_by_id = {example['id']: example['article'] for example in cnn_dailymail_data}
trueteacher_data_with_documents = []
for example in tqdm(trueteacher_data):
example['document'] = cnn_dailymail_articles_by_id[example['cnndm_id']]
trueteacher_data_with_documents.append(example)
```
## Intended Use
This dataset is intended for research use (**non-commercial**) in English.
The recommended use case is training factual consistency evaluation models for summarization.
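As a rough, self-contained illustration of that use case, each item can be cast as a (premise, hypothesis, label) pair for a binary classifier. The `to_nli_example` helper and the toy item below are hypothetical sketches, not part of the official release; the field names follow the data format described above:

```python
def to_nli_example(example):
    """Cast a TrueTeacher item (with its fetched document) into an
    NLI-style pair: the article is the premise, the model-generated
    summary is the hypothesis, and the label marks consistency."""
    return {
        "premise": example["document"],
        "hypothesis": example["summary"],
        "label": int(example["label"]),  # 1 = consistent, 0 = inconsistent
    }

# toy item shaped like the data format above (document text is invented)
item = {
    "summarization_model": "T5-11B",
    "cnndm_id": "f72048a23154de8699c307e2f41157abbfcae261",
    "summary": "Children's brains are being damaged by prolonged internet access.",
    "label": "1",
    "document": "A former children's television presenter has warned that ...",
}
nli = to_nli_example(item)
```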
## Out-of-scope use
Any use cases which violate the **cc-by-nc-4.0** license.
Usage in languages other than English.
## Citation
If you use this dataset for a research publication, please cite the TrueTeacher paper (using the bibtex entry below), as well as the CNN/DailyMail, XSum, T5 and FLAN papers mentioned above.
```
@misc{gekhman2023trueteacher,
title={TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models},
author={Zorik Gekhman and Jonathan Herzig and Roee Aharoni and Chen Elkind and Idan Szpektor},
year={2023},
eprint={2305.11171},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 3,440 | [
[
-0.011474609375,
-0.046539306640625,
0.015411376953125,
0.004917144775390625,
-0.01715087890625,
-0.01050567626953125,
-0.0276336669921875,
-0.01001739501953125,
-0.005786895751953125,
0.0237884521484375,
-0.035247802734375,
-0.047943115234375,
-0.0523681640625,... |
flozi00/LLM-Task-Classification | 2023-10-22T13:59:20.000Z | [
"region:us"
] | flozi00 | null | null | 0 | 47 | 2023-09-22T07:19:04 | ---
dataset_info:
config_name: multilingual
features:
- name: text
dtype: string
- name: named_labels
dtype: string
splits:
- name: train
num_bytes: 29832442
num_examples: 60106
download_size: 0
dataset_size: 29832442
configs:
- config_name: multilingual
data_files:
- split: train
path: multilingual/train-*
---
# Dataset Card for "LLM-Task-Classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 530 | [
[
-0.0216217041015625,
-0.009857177734375,
0.022216796875,
0.0103759765625,
-0.004390716552734375,
-0.00007617473602294922,
0.011810302734375,
-0.0156097412109375,
0.040924072265625,
0.03924560546875,
-0.067626953125,
-0.058258056640625,
-0.056549072265625,
-0... |
Rianknow/sarcastic_headline | 2023-09-30T04:43:27.000Z | [
"region:us"
] | Rianknow | null | null | 0 | 47 | 2023-09-30T04:42:16 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
emozilla/Long-Data-Collections-Fine-Tune | 2023-10-09T15:01:11.000Z | [
"region:us"
] | emozilla | null | null | 0 | 47 | 2023-10-07T02:17:23 | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 12859272204
num_examples: 98557
download_size: 7118608463
dataset_size: 12859272204
---
# Dataset Card for "Long-Data-Collections-Fine-Tune"
Parquet version of the fine-tune split of [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections)
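The character-length statistics reported in this card can be reproduced with a sketch along these lines (shown here on a toy list; real use would stream the `text` column of the downloaded parquet files):

```python
def length_stats(texts):
    """Total and average length, in characters, of an iterable of strings."""
    total = count = 0
    for t in texts:
        total += len(t)
        count += 1
    return total, (total / count if count else 0.0)

# toy stand-in; the real split has ~98K examples
total_len, average_len = length_stats(["abc", "de"])
```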
Statistics (in # of characters): `total_len: 6419025428, average_len: 65130.08135393731` | 573 | [
[
-0.05401611328125,
-0.0360107421875,
0.01396942138671875,
0.00939178466796875,
-0.06622314453125,
0.0054931640625,
-0.032806396484375,
-0.028778076171875,
0.06915283203125,
0.048431396484375,
-0.034027099609375,
-0.05126953125,
-0.035614013671875,
0.00542068... |
carnival13/sur_test_rt5 | 2023-10-10T02:04:29.000Z | [
"region:us"
] | carnival13 | null | null | 0 | 47 | 2023-10-10T02:04:05 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1141002885
num_examples: 900000
download_size: 279016062
dataset_size: 1141002885
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sur_test_rt5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 501 | [
[
-0.059234619140625,
-0.0162353515625,
0.004528045654296875,
0.018096923828125,
-0.0203857421875,
-0.00012993812561035156,
0.031646728515625,
-0.005222320556640625,
0.04046630859375,
0.038787841796875,
-0.06011962890625,
-0.058868408203125,
-0.032318115234375,
... |
Aaryan333/MisaHub_WCE_Segmentation_train_val | 2023-10-10T02:17:03.000Z | [
"region:us"
] | Aaryan333 | null | null | 0 | 47 | 2023-10-10T02:16:44 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 131460889.53022918
num_examples: 2094
- name: validation
num_bytes: 32711768.699770816
num_examples: 524
download_size: 162770574
dataset_size: 164172658.23
---
# Dataset Card for "MisaHub_WCE_Segmentation_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 504 | [
[
-0.05169677734375,
-0.0084075927734375,
0.01010894775390625,
0.032073974609375,
-0.0203399658203125,
-0.00968170166015625,
0.046600341796875,
-0.0144195556640625,
0.045806884765625,
0.0325927734375,
-0.06646728515625,
-0.0516357421875,
-0.04571533203125,
-0.... |
open-phi/textbooks_grounded | 2023-10-17T02:35:15.000Z | [
"region:us"
] | open-phi | null | null | 0 | 47 | 2023-10-17T02:35:07 | ---
dataset_info:
features:
- name: topic
dtype: string
- name: model
dtype: string
- name: concepts
sequence: string
- name: outline
sequence: string
- name: markdown
dtype: string
splits:
- name: train
num_bytes: 9661917
num_examples: 85
download_size: 3742034
dataset_size: 9661917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "textbooks_grounded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 595 | [
[
-0.035064697265625,
-0.0206756591796875,
0.0174407958984375,
0.0052032470703125,
-0.013763427734375,
-0.0074615478515625,
0.007358551025390625,
0.01505279541015625,
0.0257110595703125,
0.0372314453125,
-0.047271728515625,
-0.06561279296875,
-0.024993896484375,
... |
automated-research-group/gpt2-winogrande | 2023-10-28T06:48:10.000Z | [
"region:us"
] | automated-research-group | null | null | 0 | 47 | 2023-10-28T06:48:09 | ---
dataset_info:
features:
- name: answer
dtype: string
- name: id
dtype: string
- name: question
dtype: string
- name: input_perplexity
dtype: float64
- name: input_likelihood
dtype: float64
- name: output_perplexity
dtype: float64
- name: output_likelihood
dtype: float64
splits:
- name: validation
num_bytes: 357232
num_examples: 1267
download_size: 162550
dataset_size: 357232
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "gpt2-winogrande"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 711 | [
[
-0.0287933349609375,
-0.01241302490234375,
0.0209808349609375,
0.00969696044921875,
-0.022705078125,
-0.00353240966796875,
0.01058197021484375,
-0.0199737548828125,
0.036041259765625,
0.01416778564453125,
-0.04217529296875,
-0.04107666015625,
-0.057373046875,
... |
csarron/4m-img-caps | 2022-03-28T18:50:53.000Z | [
"region:us"
] | csarron | null | null | 1 | 46 | 2022-03-02T23:29:22 | see [read_pyarrow.py](https://gist.github.com/csarron/df712e53c9e0dcaad4eb6843e7a3d51c#file-read_pyarrow-py) for how to read one pyarrow file.
example PyTorch dataset:
```python
import glob
import random

import pyarrow as pa
from torch.utils.data import Dataset

class ImageCaptionArrowDataset(Dataset):
    def __init__(
        self,
        dataset_file,
        tokenizer,
    ):
        # dataset_file may be a glob pattern matching one or more pyarrow files
        data = [pa.ipc.open_file(pa.memory_map(f, "rb")).read_all() for f in glob.glob(dataset_file)]
        self.data = pa.concat_tables(data)
        self.tokenizer = tokenizer
        # do other initialization, like init image preprocessing fn
def __getitem__(self, index):
# item_id = self.data["id"][index].as_py()
text = self.data["text"][index].as_py() # get text
if isinstance(text, list):
text = random.choice(text)
img_bytes = self.data["image"][index].as_py() # get image bytes
# do some processing with image and text, return the features
# img_feat = self.image_bytes_to_tensor(img_bytes)
# inputs = self.tokenizer(
# text,
# padding="max_length",
# max_length=self.max_text_len,
# truncation=True,
# return_token_type_ids=True,
# return_attention_mask=True,
# add_special_tokens=True,
# return_tensors="pt",
# )
# input_ids = inputs.input_ids.squeeze(0)
# attention_mask = inputs.attention_mask.squeeze(0)
# return {
# # "item_ids": item_id,
# "text_ids": input_ids,
# "input_ids": input_ids,
# "text_masks": attention_mask,
# "pixel_values": img_feat,
# }
def __len__(self):
return len(self.data)
``` | 1,764 | [
[
-0.0167388916015625,
-0.0396728515625,
0.007671356201171875,
0.0108795166015625,
-0.0240325927734375,
-0.006244659423828125,
-0.0012578964233398438,
-0.02459716796875,
0.0128326416015625,
0.0164642333984375,
-0.02496337890625,
-0.039154052734375,
-0.047485351562... |
indonesian-nlp/id_newspapers_2018 | 2022-10-25T13:47:43.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"region:us"
] | indonesian-nlp | null | null | 8 | 46 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- id
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Indonesian Newspapers 2018
---
# Dataset of Indonesian Online Newspaper
This is a copy of the dataset created by **Feryandi Nurdiantoro** (<https://github.com/feryandi/Dataset-Artikel>). The original dataset in JSON format is stored uncompressed in Google Drive in more than 500K files, one file per article. Unfortunately, due to its size, it is impractical to download the whole dataset as one big compressed file (compressing it online takes forever). Therefore I provide here a copy and its cleaned version as compressed files.
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers: Detik, Kompas, Tempo, CNN Indonesia, Sindo, Republika and Poskota. The articles are dated between 1st January 2018 and 20th August 2018 (with a few exceptions dated earlier). The size of the uncompressed 500K JSON files (newspapers-json.tgz) is around 2.2GB, and the cleaned version in one big text file (newspapers.txt.gz) is about 1GB. The original source in Google Drive also contains a dataset in HTML format which includes raw data (pictures, CSS, JavaScript, ...) from the online news websites. I don't copy it here since it is about 60GB and we mostly only need the text content for NLP research.
The following compressed files are provided:
* newspaper-json.gz: the compressed original 500K json files.
* newspaper.txt.gz: a dump of all json files in one big cleaned text file which is normally the only one needed for language model training.
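A minimal sketch for streaming the cleaned dump (the filename follows the list above; adjust the path to wherever the file is downloaded):

```python
import gzip

def iter_articles(path):
    """Yield non-empty lines from the gzipped text dump."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield line

# e.g. peek at the first few lines of the ~1GB dump:
# for i, text in zip(range(3), iter_articles("newspaper.txt.gz")):
#     print(text[:80])
```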
The license has been copied from the source:
## License
Proyek ini dilisensikan dibawah lisensi **Creative Commons Attribution-ShareAlike 4.0 International License**\*. Kumpulan data yang dibagikan bertujuan untuk ilmu pengetahuan, pembelajaran, dan penelitian Bahasa Indonesia (komputasi maupun lingusitik), dan hanya dapat digunakan untuk hal tersebut. Kepemilikan data untuk setiap artikel dimiliki oleh media yang bersangkutan dimana data tersebut diambil; dan pemilik repository ini tidak melakukan klaim kepemilikan atas konten tersebut. Jika Anda mendapati bahwa data ini telah melanggar suatu hak cipta; mohon kontak pengelola repository ini.
This work is licensed under a **Creative Commons Attribution-ShareAlike 4.0 International License**. The dataset is shared for the sole purpose of aiding open scientific research in Bahasa Indonesia (computing or linguistics), and can only be used for that purpose. The ownership of each article within the dataset belongs to the respective newspaper from which it was extracted; and the maintainer of the repository does not claim ownership of any of the content within it. If you think, by any means, that this dataset breaches any established copyrights; please contact the repository maintainer.
| 2,986 | [
[
-0.007480621337890625,
-0.04925537109375,
0.0178985595703125,
0.037261962890625,
-0.035491943359375,
-0.00710296630859375,
-0.030029296875,
-0.028778076171875,
0.042144775390625,
0.054595947265625,
-0.0195159912109375,
-0.0222015380859375,
-0.04058837890625,
... |
albertxu/CrosswordQA | 2022-10-29T23:45:36.000Z | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:unknown",
"region:us"
] | albertxu | null | null | 3 | 46 | 2022-04-26T08:05:14 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- question-answering
task_ids:
- open-domain-qa
---
# Dataset Card for CrosswordQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Albert Xu](mailto:albertxu@usc.edu) and [Eshaan Pathak](mailto:eshaanpathak@berkeley.edu)
### Dataset Summary
The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
```
{
"id": 0,
"clue": "Clean-up target",
"answer": "mess"
}
```
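One straightforward use of these clue-answer pairs is pattern-constrained lookup during grid fill. The helper below is an illustrative sketch only (it is not part of the released solver); `?` marks an unknown letter in the grid pattern:

```python
import re

def candidates(pattern, clue_answer_pairs):
    """Return answers whose letters fit a crossword pattern like 'm??s'.

    Spaces in segmented answers (e.g. 'Buzz Lightyear') are ignored,
    since grids store answers without spaces.
    """
    rx = re.compile("^" + pattern.replace("?", ".") + "$")
    return [
        ans for _, ans in clue_answer_pairs
        if rx.match(ans.replace(" ", "").lower())
    ]

pairs = [("Clean-up target", "mess"), ("Toy Story spaceman", "Buzz Lightyear")]
# candidates("m??s", pairs) -> ["mess"]
```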
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 3,111 | [
[
-0.032745361328125,
-0.037628173828125,
0.0126495361328125,
0.00321197509765625,
-0.01690673828125,
0.01611328125,
0.004451751708984375,
-0.0267791748046875,
0.04150390625,
0.040130615234375,
-0.06744384765625,
-0.0635986328125,
-0.043060302734375,
0.0184936... |
sileod/movie_recommendation | 2023-05-25T14:53:49.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language... | sileod | Movie recommendation task based on the Movielens dataset | @InProceedings{sileodreclm22,
author="Sileo, Damien
and Vossen, Wout
and Raymaekers, Robbe",
editor="Hagen, Matthias
and Verberne, Suzan
and Macdonald, Craig
and Seifert, Christin
and Balog, Krisztian
and N{\o}rv{\aa}g, Kjetil
and Setty, Vinay",
title="Zero-Shot Recommendation as Language Modeling",
booktitle="Advances in Information Retrieval",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="223--230",
abstract="Recommendation is the task of ranking items (e.g. movies or products) according to individual user needs. Current systems rely on collaborative filtering and content-based techniques, which both require structured training data. We propose a framework for recommendation with off-the-shelf pretrained language models (LM) that only used unstructured text corpora as training data. If a user u liked Matrix and Inception, we construct a textual prompt, e.g. ``Movies like Matrix, Inception, {\$}{\$}{\{}<{\}}m{\{}>{\}}{\$}{\$}<m>'' to estimate the affinity between u and m with LM likelihood. We motivate our idea with a corpus analysis, evaluate several prompt structures, and we compare LM-based recommendation with standard matrix factorization trained on different data regimes. The code for our experiments is publicly available (https://colab.research.google.com/drive/...?usp=sharing).",
isbn="978-3-030-99739-7"
} | 9 | 46 | 2022-05-27T08:25:19 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'movie_recommendation'
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
tags:
- movie-recommendation
- collaborative-filtering
- movielens
- film
---
# Dataset for evaluation of (zero-shot) recommendation with language models
We showed that pretrained large language models can act as recommender systems, and compared few-shot learning results to matrix factorization baselines.
This is the BIG-Bench version of our language-based movie recommendation dataset.
<https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/movie_recommendation>
GPT-2 achieves 48.8% accuracy; chance is 25%.
Human accuracy is 60.4%.
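A sketch of the prompt-based setup described in the paper: candidates are ranked by the score of the completed prompt "Movies like <liked movies>, <candidate>". Here `toy_score` is a stand-in for illustration only; actual use would plug in a real LM likelihood (e.g. GPT-2 log-probabilities):

```python
def rank_candidates(liked_movies, candidates, score_fn):
    """Rank candidate movies by the score of the completed prompt
    'Movies like <liked...>, <candidate>', highest score first."""
    prompt = "Movies like " + ", ".join(liked_movies) + ", "
    scored = [(score_fn(prompt + c), c) for c in candidates]
    return [c for _, c in sorted(scored, reverse=True)]

def toy_score(text):
    # stand-in scorer: prefers shorter completions, purely illustrative;
    # replace with an LM log-likelihood for real recommendation
    return -len(text)

ranking = rank_candidates(["Matrix", "Inception"], ["Memento", "Cars"], toy_score)
```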
# Citation
```
@InProceedings{sileodreclm22,
author="Sileo, Damien
and Vossen, Wout
and Raymaekers, Robbe",
editor="Hagen, Matthias
and Verberne, Suzan
and Macdonald, Craig
and Seifert, Christin
and Balog, Krisztian
and N{\o}rv{\aa}g, Kjetil
and Setty, Vinay",
title="Zero-Shot Recommendation as Language Modeling",
booktitle="Advances in Information Retrieval",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="223--230",
isbn="978-3-030-99739-7"
}
``` | 1,452 | [
[
-0.007129669189453125,
-0.050933837890625,
0.053955078125,
-0.004199981689453125,
-0.0128173828125,
-0.0176239013671875,
-0.0258331298828125,
-0.0163726806640625,
-0.0066375732421875,
0.023590087890625,
-0.034637451171875,
-0.05462646484375,
-0.037811279296875,
... |
nateraw/kitti | 2022-07-15T18:17:21.000Z | [
"task_categories:object-detection",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | nateraw | null | null | 1 | 46 | 2022-06-15T05:58:44 | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Kitti
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids:
- object-detection
---
# Dataset Card for Kitti
The [Kitti](http://www.cvlibs.net/datasets/kitti/eval_object.php) dataset.
The Kitti object detection and object orientation estimation benchmark consists of 7,481 training images and 7,518 test images, comprising a total of 80,256 labeled objects.
[
-0.058441162109375,
-0.0143890380859375,
0.0091552734375,
-0.00994110107421875,
-0.035247802734375,
-0.00942230224609375,
0.014434814453125,
-0.02593994140625,
-0.01352691650390625,
0.03009033203125,
-0.0285491943359375,
-0.04534912109375,
-0.01384735107421875,
... |
Gpaiva/NERDE | 2022-07-28T01:27:18.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:cc-by-4.0",
"ner",
"portugu... | Gpaiva | (pt) NERDE é um dataset para NER a partir de documentos jurídicos da defesa econômica em português do Brasil, foi criado em colaboração com o Cade e o laboratório LATITUDE/UnB.
(en) NERDE is a NER dataset from economic defense legal documents in Brazilian Portuguese, created in collaboration with Cade and the LATITUDE/UnB laboratory. | """
_DESCRIPTION = | 3 | 46 | 2022-07-22T01:50:19 | ---
annotations_creators:
- expert-generated
language:
- pt
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NERDE
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- ner
- portuguese-ner
- economic-defense
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for NERDE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [NERDE repository](https://github.com/guipaiva/NERDE)
- **Point of Contact:** [Guilherme P. Paiva](mailto:guipaivagpp@gmail.com)
### Dataset Summary
NERDE is a dataset for Named Entity Recognition for Economic Defense. It was created in collaboration with the LATITUDE/UnB laboratory and the Administrative Council for Economic Defense (Cade).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language in the dataset is Brazilian Portuguese from legal documents. The BCP-47 code for Brazilian Portuguese is pt-BR
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@guipaiva](https://github.com/guipaiva) for adding this dataset.
| 3,134 | [
[
-0.049224853515625,
-0.03472900390625,
-0.0008835792541503906,
0.016143798828125,
-0.01081085205078125,
0.00794219970703125,
-0.008514404296875,
-0.03326416015625,
0.06304931640625,
0.058624267578125,
-0.05364990234375,
-0.0692138671875,
-0.055084228515625,
... |
arbml/Arabic_Hate_Speech | 2022-10-21T20:22:02.000Z | [
"region:us"
] | arbml | null | null | 1 | 46 | 2022-10-21T20:21:56 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet
dtype: string
- name: is_off
dtype: string
- name: is_hate
dtype: string
- name: is_vlg
dtype: string
- name: is_vio
dtype: string
splits:
- name: train
num_bytes: 1656540
num_examples: 8557
- name: validation
num_bytes: 234165
num_examples: 1266
download_size: 881261
dataset_size: 1890705
---
# Dataset Card for "Arabic_Hate_Speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 598 | [
[
-0.048065185546875,
-0.034149169921875,
-0.00487518310546875,
0.0088958740234375,
-0.00913238525390625,
0.007160186767578125,
0.005321502685546875,
-0.0209503173828125,
0.051025390625,
0.026123046875,
-0.0455322265625,
-0.07049560546875,
-0.06475830078125,
-... |
jpwahle/autoregressive-paraphrase-dataset | 2022-11-19T12:14:43.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"plagiarism",
"pa... | jpwahle | null | null | 1 | 46 | 2022-11-06T08:28:27 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Machine Paraphrase Dataset (T5, GPT-3)
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- plagiarism
- paraphrase
- academic integrity
- arxiv
- wikipedia
- theses
- bert
- roberta
- t5
- gpt-3
task_categories:
- text-classification
- text-generation
task_ids: []
---
# Dataset Card for the Machine Paraphrase Dataset (T5, GPT-3)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Size:** 163MB
- **Repository:** https://github.com/jpwahle/emnlp22-transforming
- **Paper:** https://arxiv.org/abs/2210.03568
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,970 | [
[
-0.036834716796875,
-0.03973388671875,
0.00787353515625,
0.01763916015625,
-0.01227569580078125,
0.01275634765625,
-0.0290985107421875,
-0.0233917236328125,
0.05242919921875,
0.050506591796875,
-0.068359375,
-0.07769775390625,
-0.04730224609375,
0.0126800537... |
gsarti/mt_geneval | 2022-11-21T14:52:09.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:it",
"language:fr",
"language:ar",
"language:de",
"language:hi",
"lang... | gsarti | The MT-GenEval benchmark evaluates gender translation accuracy on English -> {Arabic, French, German, Hindi, Italian,
Portuguese, Russian, Spanish}. The dataset contains individual sentences with annotations on the gendered target words,
and contrastive original-inverted translations with additional preceding context. | @inproceedings{currey-etal-2022-mtgeneval,
title = "{MT-GenEval}: {A} Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation",
author = "Currey, Anna and
Nadejde, Maria and
Pappagari, Raghavendra and
Mayer, Mia and
Lauly, Stanislas and
Niu, Xing and
Hsu, Benjamin and
Dinu, Georgiana",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/pdf/2211.01355.pdf",
} | 1 | 46 | 2022-11-21T10:50:15 | ---
annotations_creators:
- expert-generated
language:
- en
- it
- fr
- ar
- de
- hi
- pt
- ru
- es
language_creators:
- expert-generated
license:
- cc-by-sa-3.0
multilinguality:
- translation
pretty_name: mt_geneval
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- gender
- constrained mt
task_categories:
- translation
task_ids: []
---
# Dataset Card for MT-GenEval
## Table of Contents
- [Dataset Card for MT-GenEval](#dataset-card-for-mt-geneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Machine Translation](#machine-translation)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/amazon-science/machine-translation-gender-eval)
- **Paper:** [EMNLP 2022](https://arxiv.org/abs/2211.01355)
- **Point of Contact:** [Anna Currey](mailto:ancurrey@amazon.com)
### Dataset Summary
The MT-GenEval benchmark evaluates gender translation accuracy on English -> {Arabic, French, German, Hindi, Italian, Portuguese, Russian, Spanish}. The dataset contains individual sentences with annotations on the gendered target words, and contrastive original-inverted translations with additional preceding context.
**Disclaimer**: *The MT-GenEval benchmark was released in the EMNLP 2022 paper [MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation](https://arxiv.org/abs/2211.01355) by Anna Currey, Maria Nadejde, Raghavendra Pappagari, Mia Mayer, Stanislas Lauly, Xing Niu, Benjamin Hsu, and Georgiana Dinu and is hosted through Github by the [Amazon Science](https://github.com/amazon-science?type=source) organization. The dataset is licensed under a [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/).*
### Supported Tasks and Leaderboards
#### Machine Translation
Refer to the [original paper](https://arxiv.org/abs/2211.01355) for additional details on gender accuracy evaluation with MT-GenEval.
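As a rough illustration only (a simplification, not the paper's official metric), gender translation accuracy can be approximated by checking that every annotated gendered keyword appears in the system translation:

```python
def keywords_present(translation: str, keywords: str) -> bool:
    """keywords: ';'-separated gendered terms, as in the *_keywords fields."""
    return all(k in translation for k in keywords.split(";"))

# hypothetical system output checked against the reference keywords
ref_keywords = "stato picchiato;fermo"
hyp = "Pagratidis ritratto la confessione, era stato picchiato e rimase fermo."
print(keywords_present(hyp, ref_keywords))  # True
```

For the real evaluation protocol (including contrastive scoring against the gender-inverted reference), refer to the paper and the official repository.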
### Languages
The dataset contains source English sentences extracted from Wikipedia translated into the following languages: Arabic (`ar`), French (`fr`), German (`de`), Hindi (`hi`), Italian (`it`), Portuguese (`pt`), Russian (`ru`), and Spanish (`es`).
## Dataset Structure
### Data Instances
The dataset contains two configuration types, `sentences` and `context`, mirroring the original repository structure, with source and target language specified in the configuration name (e.g. `sentences_en_ar`, `context_en_it`). The `sentences` configurations contain masculine and feminine versions of individual sentences with gendered word annotations. Here is an example entry of the `sentences_en_it` split (all `sentences_en_XX` splits have the same structure):
```json
{
"orig_id": 0,
"source_feminine": "Pagratidis quickly recanted her confession, claiming she was psychologically pressured and beaten, and until the moment of her execution, she remained firm in her innocence.",
"reference_feminine": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era stata picchiata, e fino al momento della sua esecuzione, rimase ferma sulla sua innocenza.",
"source_masculine": "Pagratidis quickly recanted his confession, claiming he was psychologically pressured and beaten, and until the moment of his execution, he remained firm in his innocence.",
"reference_masculine": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era stato picchiato, e fino al momento della sua esecuzione, rimase fermo sulla sua innocenza.",
"source_feminine_annotated": "Pagratidis quickly recanted <F>her</F> confession, claiming <F>she</F> was psychologically pressured and beaten, and until the moment of <F>her</F> execution, <F>she</F> remained firm in <F>her</F> innocence.",
"reference_feminine_annotated": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era <F>stata picchiata</F>, e fino al momento della sua esecuzione, rimase <F>ferma</F> sulla sua innocenza.",
"source_masculine_annotated": "Pagratidis quickly recanted <M>his</M> confession, claiming <M>he</M> was psychologically pressured and beaten, and until the moment of <M>his</M> execution, <M>he</M> remained firm in <M>his</M> innocence.",
"reference_masculine_annotated": "Pagratidis subito ritrattò la sua confessione, affermando che era aveva subito pressioni psicologiche e era <M>stato picchiato</M>, e fino al momento della sua esecuzione, rimase <M>fermo</M> sulla sua innocenza.",
"source_feminine_keywords": "her;she;her;she;her",
"reference_feminine_keywords": "stata picchiata;ferma",
"source_masculine_keywords": "his;he;his;he;his",
"reference_masculine_keywords": "stato picchiato;fermo"
}
```
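The gendered spans can be recovered from the `*_annotated` fields with a simple regular expression; a minimal sketch (not part of the official tooling), using the feminine example above:

```python
import re

def extract_keywords(annotated: str, tag: str) -> str:
    """Collect spans wrapped in <F>...</F> or <M>...</M> markers,
    joined with ';' as in the *_keywords fields."""
    return ";".join(re.findall(rf"<{tag}>(.*?)</{tag}>", annotated))

annotated = ("Pagratidis quickly recanted <F>her</F> confession, claiming "
             "<F>she</F> was psychologically pressured and beaten, and until "
             "the moment of <F>her</F> execution, <F>she</F> remained firm in "
             "<F>her</F> innocence.")
print(extract_keywords(annotated, "F"))  # her;she;her;she;her
```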
The `context` configuration instead contains different English sources related to stereotypical professional roles, with additional preceding context and contrastive original-inverted translations. Here is an example entry of the `context_en_it` split (all `context_en_XX` splits have the same structure):
```json
{
"orig_id": 0,
"context": "Pierpont told of entering and holding up the bank and then fleeing to Fort Wayne, where the loot was divided between him and three others.",
"source": "However, Pierpont stated that Skeer was the planner of the robbery.",
"reference_original": "Comunque, Pierpont disse che Skeer era il pianificatore della rapina.",
"reference_flipped": "Comunque, Pierpont disse che Skeer era la pianificatrice della rapina."
}
```
### Data Splits
All `sentences_en_XX` configurations have 1200 examples in the `train` split and 300 in the `test` split. For the `context_en_XX` configurations, the number of examples depends on the language pair:
| Configuration | # Train | # Test |
| :-----------: | :--------: | :-----: |
| `context_en_ar` | 792 | 1100 |
| `context_en_fr` | 477 | 1099 |
| `context_en_de` | 598 | 1100 |
| `context_en_hi` | 397 | 1098 |
| `context_en_it` | 465 | 1904 |
| `context_en_pt` | 574 | 1089 |
| `context_en_ru` | 583 | 1100 |
| `context_en_es` | 534 | 1096 |
### Dataset Creation
From the original paper:
>In developing MT-GenEval, our goal was to create a realistic, gender-balanced dataset that naturally incorporates a diverse range of gender phenomena. To this end, we extracted English source sentences from Wikipedia as the basis for our dataset. We automatically pre-selected relevant sentences using EN gender-referring words based on the list provided by [Zhao et al. (2018)](https://doi.org/10.18653/v1/N18-2003).
Please refer to the original article [MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation](https://arxiv.org/abs/2211.01355) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of MT-GenEval are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-ShareAlike 3.0 International License](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
Please cite the authors if you use these corpora in your work.
```bibtex
@inproceedings{currey-etal-2022-mtgeneval,
title = "{MT-GenEval}: {A} Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation",
author = "Currey, Anna and
Nadejde, Maria and
Pappagari, Raghavendra and
Mayer, Mia and
Lauly, Stanislas and
Niu, Xing and
Hsu, Benjamin and
Dinu, Georgiana",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2211.01355",
}
``` | 8,665 | [
[
-0.023468017578125,
-0.057373046875,
0.0282745361328125,
0.016204833984375,
-0.006622314453125,
-0.01409149169921875,
-0.004955291748046875,
-0.015869140625,
-0.00043487548828125,
0.01751708984375,
-0.0635986328125,
-0.050079345703125,
-0.054229736328125,
0.... |
RobotsMaliAI/bayelemabaga | 2023-04-24T16:56:24.000Z | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:bm",
"language:fr",
"region:us"
] | RobotsMaliAI | The Bayelemabaga dataset is a collection of 44160 aligned machine translation ready Bambara-French lines,
originating from the Corpus Bambara de Reference. The dataset is constituted of text extracted from 231 source files,
varying from periodicals, books, short stories and blog posts to parts of the Bible and the Quran. | @misc{bayelemabagamldataset2022,
title={Machine Learning Dataset Development for Manding Languages},
author={
Valentin Vydrin and
Christopher Homan and
Michael Leventhal and
Allashera Auguste Tapo and
Marco Zampieri and
Jean-Jacques Meric and
Kirill Maslinsky and
Andrij Rovenchak and
Sebastien Diarra
},
howpublished = {url{https://github.com/robotsmali-ai/datasets}},
year={2022}
} | 3 | 46 | 2022-12-04T08:47:14 | ---
task_categories:
- translation
- text-generation
language:
- bm
- fr
size_categories:
- 10K<n<100K
---
# BAYƐLƐMABAGA: Parallel French - Bambara Dataset for Machine Learning
## Overview
The Bayelemabaga dataset is a collection of 46976 aligned, machine-translation-ready Bambara-French lines, originating from the [Corpus Bambara de Reference](http://cormande.huma-num.fr/corbama/run.cgi/first_form). The dataset is constituted of text extracted from **264** text files, varying from periodicals, books, short stories and blog posts to parts of the Bible and the Quran.
## Snapshot: 46976
| | |
|:---|---:|
| **Lines** | **46976** |
| French Tokens (spacy) | 691312 |
| Bambara Tokens (daba) | 660732 |
| French Types | 32018 |
| Bambara Types | 29382 |
| Avg. Fr line length | 77.6 |
| Avg. Bam line length | 61.69 |
| Number of text sources | 264 |
## Data Splits
| | | |
|:-----:|:---:|------:|
| Train | 80% | 37580 |
| Valid | 10% | 4698 |
| Test | 10% | 4698 |
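The 80/10/10 partition above can be reproduced over any list of aligned pairs; a minimal, hypothetical sketch (the actual split script is not published here, and exact per-split counts may differ by ±1 from the table due to rounding):

```python
import random

def split_80_10_10(pairs, seed=42):
    """Shuffle aligned Bambara-French pairs and cut them into
    80% train / 10% valid / 10% test."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic shuffle for reproducibility
    n = len(pairs)
    n_train = int(n * 0.8)
    n_valid = int(n * 0.1)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_valid],
            pairs[n_train + n_valid:])

train, valid, test = split_80_10_10(range(46976))
```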
## Remarks
* We are working on resolving some last minute misalignment issues.
### Maintenance
* This dataset is supposed to be actively maintained.
### Benchmarks:
- `Coming soon`
### Sources
- [`sources`](./bayelemabaga/sources.txt)
### To note:
- ʃ => (sh/shy) sound: symbol left in the dataset, although it is not part of either Bambara or French orthography.
## License
- `CC-BY-SA-4.0`
## Version
- `1.0.1`
## Citation
```
@misc{bayelemabagamldataset2022,
title={Machine Learning Dataset Development for Manding Languages},
author={
Valentin Vydrin and
Jean-Jacques Meric and
Kirill Maslinsky and
Andrij Rovenchak and
Allahsera Auguste Tapo and
Sebastien Diarra and
Christopher Homan and
Marco Zampieri and
Michael Leventhal
},
howpublished = {url{https://github.com/robotsmali-ai/datasets}},
year={2022}
}
```
## Contacts
- `sdiarra <at> robotsmali <dot> org`
- `aat3261 <at> rit <dot> edu` | 1,998 | [
[
-0.033935546875,
-0.034210205078125,
0.007122039794921875,
0.018463134765625,
-0.011199951171875,
0.01215362548828125,
-0.007541656494140625,
-0.00414276123046875,
0.00571441650390625,
0.036651611328125,
-0.043243408203125,
-0.06646728515625,
-0.049224853515625,... |
NeelNanda/c4-10k | 2022-12-26T23:12:52.000Z | [
"region:us"
] | NeelNanda | null | null | 0 | 46 | 2022-12-26T23:12:45 | ---
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: timestamp[us]
- name: url
dtype: string
splits:
- name: train
num_bytes: 21970889
num_examples: 10000
download_size: 13645542
dataset_size: 21970889
---
# Dataset Card for "c4-10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 429 | [
[
-0.045440673828125,
0.00023090839385986328,
0.020782470703125,
0.028533935546875,
-0.0187225341796875,
0.0118255615234375,
0.0189208984375,
-0.034454345703125,
0.056976318359375,
0.0302734375,
-0.054168701171875,
-0.051971435546875,
-0.042144775390625,
0.000... |
waylandy/phosformer_curated | 2023-01-25T17:44:30.000Z | [
"region:us"
] | waylandy | null | null | 1 | 46 | 2023-01-25T17:42:25 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
relbert/t_rex | 2023-03-31T21:02:35.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | T-Rex dataset. | @inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
} | 1 | 46 | 2023-01-25T21:47:54 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: relbert/t_rex
---
# Dataset Card for "relbert/t_rex"
## Dataset Description
- **Repository:** [https://hadyelsahar.github.io/t-rex/](https://hadyelsahar.github.io/t-rex/)
- **Paper:** [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/)
- **Dataset:** Cleaned T-REX for link prediction.
## Dataset Summary
This is the T-REX dataset proposed in [https://aclanthology.org/L18-1544/](https://aclanthology.org/L18-1544/).
The test split is universal across different versions and has been manually checked by the author of [relbert/t_rex](https://huggingface.co/datasets/relbert/t_rex);
it contains predicates that are not included in the train/validation splits.
The number of triples in each split is summarized in the table below.
***Note:*** To make it consistent with other datasets ([nell](https://huggingface.co/datasets/relbert/nell) and [conceptnet](https://huggingface.co/datasets/relbert/conceptnet)), we rename predicate/subject/object as relation/head/tail.
- Number of instances
| | train | validation | test |
|:--------------------------------|--------:|-------------:|-------:|
| number of triples | 1,274,264 | 318,566 | 122 |
| number of unique relation types (predicate) | 759 | 676 | 34 |
### Filtering to Remove Noise
We apply filtering to keep triples with named entities in either the head or the tail (`named-entity filter`).
Then, we remove predicates that have fewer than three triples (`rare-predicate filter`).
After the filtering, we manually remove overly vague and noisy predicates and unify identical predicates with different names (see the annotation [here](https://huggingface.co/datasets/relbert/t_rex/raw/main/predicate_manual_check.csv)).
Finally, we remove triples that contain entities with a frequency of less than 5 (`frequency`).
| Dataset | `raw` | `named-entity filter` | `rare-predicate` | `unify-denoise-predicate` | `frequency` |
|:----------|-----------:|-----------------------:|-----------------:|--------------------------:|------------:|
| Triples | 20,877,472 | 12,561,573 | 12,561,250 | 12,410,726 | 1,616,065 |
| Predicate | 1,616 | 1,470 | 1,237 | 839 | 839 |
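The last two filtering stages (rare predicates and low-frequency entities) can be sketched in a few lines of Python. This is a hypothetical illustration of the thresholds described above, not the repository's actual scripts:

```python
from collections import Counter

def filter_triples(triples, min_pred=3, min_entity=5):
    """triples: list of (head, relation, tail) tuples.
    Drop relations with fewer than `min_pred` triples, then drop
    triples whose head or tail entity occurs fewer than `min_entity` times."""
    pred_counts = Counter(r for _, r, _ in triples)
    triples = [t for t in triples if pred_counts[t[1]] >= min_pred]
    ent_counts = Counter()
    for h, _, t in triples:
        ent_counts[h] += 1
        ent_counts[t] += 1
    return [t for t in triples
            if ent_counts[t[0]] >= min_entity and ent_counts[t[2]] >= min_entity]

toy = [("Tajik", "is a type of", "Persian")] * 3 + [("x", "rare", "y")]
print(len(filter_triples(toy, min_pred=3, min_entity=3)))  # 3
```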
## Dataset Structure
An example looks as follows.
```shell
{
"tail": "Persian",
"head": "Tajik",
"title": "Tandoor bread",
"text": "Tandoor bread (Arabic: \u062e\u0628\u0632 \u062a\u0646\u0648\u0631 khubz tannoor, Armenian: \u0569\u0578\u0576\u056b\u0580 \u0570\u0561\u0581 tonir hats, Azerbaijani: T\u0259ndir \u00e7\u00f6r\u0259yi, Georgian: \u10d7\u10dd\u10dc\u10d8\u10e1 \u10de\u10e3\u10e0\u10d8 tonis puri, Kazakh: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Kyrgyz: \u0442\u0430\u043d\u0434\u044b\u0440 \u043d\u0430\u043d tandyr nan, Persian: \u0646\u0627\u0646 \u062a\u0646\u0648\u0631\u06cc nan-e-tanuri, Tajik: \u043d\u043e\u043d\u0438 \u0442\u0430\u043d\u0443\u0440\u0439 noni tanuri, Turkish: Tand\u0131r ekme\u011fi, Uyghur: ) is a type of leavened bread baked in a clay oven called a tandoor, similar to naan. In Pakistan, tandoor breads are popular especially in the Khyber Pakhtunkhwa and Punjab regions, where naan breads are baked in tandoor clay ovens fired by wood or charcoal. These tandoor-prepared naans are known as tandoori naan.",
"relation": "[Artifact] is a type of [Type]"
}
```
## Reproduce the Dataset
```shell
git clone https://huggingface.co/datasets/relbert/t_rex
cd t_rex
mkdir data_raw
cd data_raw
wget https://figshare.com/ndownloader/files/8760241
unzip 8760241
cd ../
python process.py
python unify_predicate.py
python min_entity_filter.py
python create_split.py
```
## Citation Information
```
@inproceedings{elsahar2018t,
title={T-rex: A large scale alignment of natural language with knowledge base triples},
author={Elsahar, Hady and Vougiouklis, Pavlos and Remaci, Arslen and Gravier, Christophe and Hare, Jonathon and Laforest, Frederique and Simperl, Elena},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
```
| 4,337 | [
[
-0.0340576171875,
-0.04742431640625,
0.0155792236328125,
0.00968170166015625,
-0.0136871337890625,
0.00045990943908691406,
-0.0101165771484375,
-0.0291748046875,
0.043243408203125,
0.031890869140625,
-0.040008544921875,
-0.06317138671875,
-0.0361328125,
0.01... |
Multimodal-Fatima/OK-VQA_test | 2023-05-29T02:08:55.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | 0 | 46 | 2023-03-07T14:45:38 | ---
dataset_info:
features:
- name: image
dtype: image
- name: question_type
dtype: string
- name: confidence
dtype: int32
- name: answers
sequence: string
- name: answers_original
list:
- name: answer
dtype: string
- name: raw_answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: id_image
dtype: int64
- name: answer_type
dtype: string
- name: question_id
dtype: int64
- name: question
dtype: string
- name: id
dtype: int64
- name: clip_tags_LAION_ViT_H_14_2B
sequence: string
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption_beam_5
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes
list:
- name: attribute
dtype: string
- name: box
sequence: float32
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float32
- name: size
dtype: string
- name: tag
dtype: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_module_random
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_module
sequence: string
- name: captions_module_filter
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: clip_tags_ViT_B_16_with_openai
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_with_openai
sequence: string
- name: clip_tags_ViT_L_14_with_openai
sequence: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_
list:
- name: attribute
dtype: string
- name: box
sequence: float64
- name: captions_all_patches
sequence: string
- name: label
dtype: string
- name: location
dtype: string
- name: ratio
dtype: float64
- name: size
dtype: string
- name: tag
dtype: string
- name: blip_caption_topk_50_Salesforce_blip_image_captioning_large_multiple
sequence: string
splits:
- name: test
num_bytes: 1133674079.0
num_examples: 5046
download_size: 959321361
dataset_size: 1133674079.0
---
# Dataset Card for "OK-VQA_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 3,060 | [
[
-0.054168701171875,
-0.032470703125,
0.0166473388671875,
-0.00833892822265625,
-0.0070648193359375,
-0.01532745361328125,
0.033905029296875,
-0.0087738037109375,
0.047088623046875,
0.03375244140625,
-0.04888916015625,
-0.040496826171875,
-0.02587890625,
-0.0... |
lexlms/lex_files | 2023-07-24T11:59:58.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended",
"language:en",
... | lexlms | null | null | 4 | 46 | 2023-05-10T11:40:26 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: LexFiles
tags:
- legal
- law
---
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#dataset-specifications)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/2305.07507
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
LeXFiles is a new, diverse English multinational legal corpus comprising 11 distinct sub-corpora that cover legislation and case law from six primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA); hence that corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.
### Dataset Specifications
| Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) |
|-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------|
| EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% |
| EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% |
| ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% |
| UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% |
| UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% |
| Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% |
| Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% |
| Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% |
| U.S. Court Decisions [1] | `us-court-cases` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% |
| U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% |
| U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% |
| Total | `lexlms/lex_files` | 5.8M | 18.8B | 100% | 100% | 100% |
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold to exclude cases relying on severely outdated and in many cases harmful legal standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
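The smoothed sampling of Lample et al. (2019) draws each corpus with probability q_i = p_i^a / Σ_j p_j^a, where p_i is the corpus's share of the total tokens. A minimal sketch of the formula (an illustration, not the training code):

```python
def sampling_probs(token_counts, a=0.5):
    """Exponentially smoothed sampling: q_i proportional to p_i ** a,
    where p_i is corpus i's fraction of the total token count."""
    total = sum(token_counts.values())
    weights = {k: (v / total) ** a for k, v in token_counts.items()}
    z = sum(weights.values())
    return {k: w / z for k, w in weights.items()}

# toy example: a 90%/10% token split is smoothed to 75%/25% at a=0.5
probs = sampling_probs({"big": 9_000, "small": 1_000}, a=0.5)
```

Smaller values of `a` flatten the distribution further, which is why the low-resource sub-corpora in the table gain share as `a` drops from 0.5 to 0.2.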
Additional corpora not considered for pre-training, since they do not represent factual legal knowledge.
| Corpus | Corpus alias | Documents | Tokens |
|----------------------------------------|------------------------|-----------|--------|
| Legal web pages from C4 | `legal-c4` | 284K | 340M |
### Usage
Load a specific sub-corpus, given the corpus alias, as presented above.
```python
from datasets import load_dataset
dataset = load_dataset('lexlms/lex_files', name='us-court-cases')
```
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/)
```
@inproceedings{chalkidis-etal-2023-lexfiles,
title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development",
author = "Chalkidis, Ilias and
Garneau, Nicolas and
Goanta, Catalina and
Katz, Daniel and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.865",
pages = "15513--15535",
}
``` | 5,145 | [
[
-0.0244293212890625,
-0.0172576904296875,
0.036956787109375,
0.004383087158203125,
-0.029266357421875,
0.016082763671875,
-0.01300048828125,
-0.024383544921875,
0.0271453857421875,
0.0294952392578125,
-0.02020263671875,
-0.0693359375,
-0.04534912109375,
0.00... |
ccmusic-database/music_genre | 2023-10-14T11:13:37.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | This database contains about 1700 musical pieces (.mp3 format)
with lengths of 270-300s that are divided into 17 genres in total. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | 8 | 46 | 2023-05-25T14:10:47 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Music Genre Database
size_categories:
- 1K<n<10K
---
# Dataset Card for Music Genre Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/music_genre>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains about 1700 musical pieces (.mp3 format) with lengths of 270-300s that are divided into 17 genres in total.
### Supported Tasks and Leaderboards
Audio classification
### Languages
Multilingual
## Usage
When doing a classification task, use only one of the columns `fst_level_label`, `sec_level_label`, and `thr_level_label`; do not mix them.
```python
from datasets import load_dataset
dataset = load_dataset("ccmusic-database/music_genre", split="test")
for item in dataset:
print(item)
```
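Since only one of the three label columns should be used at a time, a small helper can strip the other two before training. This is a hedged sketch: the label column names come from the card's advice above, and the feature column name (`mel_spectrogram`) is an assumption based on the card's note that only mel spectrograms are provided — verify against the released schema.

```python
LABEL_COLUMNS = {"fst_level_label", "sec_level_label", "thr_level_label"}

def keep_one_label(item, level="fst_level_label"):
    """Return a copy of a dataset item with exactly one label column kept."""
    if level not in LABEL_COLUMNS:
        raise ValueError(f"unknown label level: {level}")
    return {k: v for k, v in item.items() if k not in LABEL_COLUMNS or k == level}

# Hypothetical item shape for illustration only.
item = {"mel_spectrogram": "<image>", "fst_level_label": 1,
        "sec_level_label": 7, "thr_level_label": 12}
second_level_item = keep_one_label(item, "sec_level_label")
```

Applied over a split (e.g. with `dataset.map`), this keeps the chosen granularity and avoids mixing label levels in one training run.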
## Dataset Structure
### Data Instances
.zip(.jpg)
### Data Fields
```
1_Classic
3_Symphony
4_Opera
5_Solo
6_Chamber
2_Non_classic
7_Pop
12_Pop_vocal_ballad
13_Adult_contemporary
14_Teen_pop
8_Dance_and_house
15_Contemporary_dance_pop
16_Dance_pop
9_Indie
17_Classic_indie_pop
18_Chamber_cabaret_and_art_pop
10_Soul_or_r_and_b
11_Rock
19_Adult_alternative_rock
20_Uplifting_anthemic_rock
21_Soft_rock
22_Acoustic_pop
```
### Data Splits
Train, valid, test
## Dataset Creation
### Curation Rationale
Promoting the development of AI in the music industry
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Composers of the songs in dataset
### Annotations
#### Annotation process
Students collected about 1700 musical pieces (.mp3 format) with lengths of 270-300s divided into 17 genres in total.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
Due to copyright issues with the original music, only mel spectrograms are provided in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Most are English songs
### Other Known Limitations
Samples are not balanced enough
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
Coming soon...
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: A Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provide a dataset for music genre classification | 4,168 | [
[
-0.038604736328125,
-0.0258026123046875,
0.011566162109375,
0.0338134765625,
-0.0241851806640625,
0.006259918212890625,
-0.042388916015625,
-0.0192413330078125,
0.03228759765625,
0.040069580078125,
-0.070068359375,
-0.08648681640625,
-0.0205535888671875,
0.0... |
wtcherr/LAION10K | 2023-06-05T15:19:52.000Z | [
"region:us"
] | wtcherr | null | null | 1 | 46 | 2023-06-05T15:19:40 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 171092053.43
num_examples: 8226
download_size: 170695611
dataset_size: 171092053.43
---
# Dataset Card for "LAION10K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 397 | [
[
-0.03460693359375,
-0.0028858184814453125,
0.016265869140625,
0.025543212890625,
-0.01197052001953125,
-0.006145477294921875,
0.020233154296875,
-0.00792694091796875,
0.0638427734375,
0.040008544921875,
-0.050140380859375,
-0.05145263671875,
-0.039581298828125,
... |
Xenova/quickdraw-small | 2023-06-25T17:08:02.000Z | [
"region:us"
] | Xenova | null | null | 1 | 46 | 2023-06-23T22:04:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': aircraft carrier
'1': airplane
'2': alarm clock
'3': ambulance
'4': angel
'5': animal migration
'6': ant
'7': anvil
'8': apple
'9': arm
'10': asparagus
'11': axe
'12': backpack
'13': banana
'14': bandage
'15': barn
'16': baseball bat
'17': baseball
'18': basket
'19': basketball
'20': bat
'21': bathtub
'22': beach
'23': bear
'24': beard
'25': bed
'26': bee
'27': belt
'28': bench
'29': bicycle
'30': binoculars
'31': bird
'32': birthday cake
'33': blackberry
'34': blueberry
'35': book
'36': boomerang
'37': bottlecap
'38': bowtie
'39': bracelet
'40': brain
'41': bread
'42': bridge
'43': broccoli
'44': broom
'45': bucket
'46': bulldozer
'47': bus
'48': bush
'49': butterfly
'50': cactus
'51': cake
'52': calculator
'53': calendar
'54': camel
'55': camera
'56': camouflage
'57': campfire
'58': candle
'59': cannon
'60': canoe
'61': car
'62': carrot
'63': castle
'64': cat
'65': ceiling fan
'66': cell phone
'67': cello
'68': chair
'69': chandelier
'70': church
'71': circle
'72': clarinet
'73': clock
'74': cloud
'75': coffee cup
'76': compass
'77': computer
'78': cookie
'79': cooler
'80': couch
'81': cow
'82': crab
'83': crayon
'84': crocodile
'85': crown
'86': cruise ship
'87': cup
'88': diamond
'89': dishwasher
'90': diving board
'91': dog
'92': dolphin
'93': donut
'94': door
'95': dragon
'96': dresser
'97': drill
'98': drums
'99': duck
'100': dumbbell
'101': ear
'102': elbow
'103': elephant
'104': envelope
'105': eraser
'106': eye
'107': eyeglasses
'108': face
'109': fan
'110': feather
'111': fence
'112': finger
'113': fire hydrant
'114': fireplace
'115': firetruck
'116': fish
'117': flamingo
'118': flashlight
'119': flip flops
'120': floor lamp
'121': flower
'122': flying saucer
'123': foot
'124': fork
'125': frog
'126': frying pan
'127': garden hose
'128': garden
'129': giraffe
'130': goatee
'131': golf club
'132': grapes
'133': grass
'134': guitar
'135': hamburger
'136': hammer
'137': hand
'138': harp
'139': hat
'140': headphones
'141': hedgehog
'142': helicopter
'143': helmet
'144': hexagon
'145': hockey puck
'146': hockey stick
'147': horse
'148': hospital
'149': hot air balloon
'150': hot dog
'151': hot tub
'152': hourglass
'153': house plant
'154': house
'155': hurricane
'156': ice cream
'157': jacket
'158': jail
'159': kangaroo
'160': key
'161': keyboard
'162': knee
'163': knife
'164': ladder
'165': lantern
'166': laptop
'167': leaf
'168': leg
'169': light bulb
'170': lighter
'171': lighthouse
'172': lightning
'173': line
'174': lion
'175': lipstick
'176': lobster
'177': lollipop
'178': mailbox
'179': map
'180': marker
'181': matches
'182': megaphone
'183': mermaid
'184': microphone
'185': microwave
'186': monkey
'187': moon
'188': mosquito
'189': motorbike
'190': mountain
'191': mouse
'192': moustache
'193': mouth
'194': mug
'195': mushroom
'196': nail
'197': necklace
'198': nose
'199': ocean
'200': octagon
'201': octopus
'202': onion
'203': oven
'204': owl
'205': paint can
'206': paintbrush
'207': palm tree
'208': panda
'209': pants
'210': paper clip
'211': parachute
'212': parrot
'213': passport
'214': peanut
'215': pear
'216': peas
'217': pencil
'218': penguin
'219': piano
'220': pickup truck
'221': picture frame
'222': pig
'223': pillow
'224': pineapple
'225': pizza
'226': pliers
'227': police car
'228': pond
'229': pool
'230': popsicle
'231': postcard
'232': potato
'233': power outlet
'234': purse
'235': rabbit
'236': raccoon
'237': radio
'238': rain
'239': rainbow
'240': rake
'241': remote control
'242': rhinoceros
'243': rifle
'244': river
'245': roller coaster
'246': rollerskates
'247': sailboat
'248': sandwich
'249': saw
'250': saxophone
'251': school bus
'252': scissors
'253': scorpion
'254': screwdriver
'255': sea turtle
'256': see saw
'257': shark
'258': sheep
'259': shoe
'260': shorts
'261': shovel
'262': sink
'263': skateboard
'264': skull
'265': skyscraper
'266': sleeping bag
'267': smiley face
'268': snail
'269': snake
'270': snorkel
'271': snowflake
'272': snowman
'273': soccer ball
'274': sock
'275': speedboat
'276': spider
'277': spoon
'278': spreadsheet
'279': square
'280': squiggle
'281': squirrel
'282': stairs
'283': star
'284': steak
'285': stereo
'286': stethoscope
'287': stitches
'288': stop sign
'289': stove
'290': strawberry
'291': streetlight
'292': string bean
'293': submarine
'294': suitcase
'295': sun
'296': swan
'297': sweater
'298': swing set
'299': sword
'300': syringe
'301': t-shirt
'302': table
'303': teapot
'304': teddy-bear
'305': telephone
'306': television
'307': tennis racquet
'308': tent
'309': The Eiffel Tower
'310': The Great Wall of China
'311': The Mona Lisa
'312': tiger
'313': toaster
'314': toe
'315': toilet
'316': tooth
'317': toothbrush
'318': toothpaste
'319': tornado
'320': tractor
'321': traffic light
'322': train
'323': tree
'324': triangle
'325': trombone
'326': truck
'327': trumpet
'328': umbrella
'329': underwear
'330': van
'331': vase
'332': violin
'333': washing machine
'334': watermelon
'335': waterslide
'336': whale
'337': wheel
'338': windmill
'339': wine bottle
'340': wine glass
'341': wristwatch
'342': yoga
'343': zebra
'344': zigzag
splits:
- name: train
num_bytes: 1759529671.8579164
num_examples: 4500000
- name: valid
num_bytes: 97751648.4365509
num_examples: 250000
- name: test
num_bytes: 97751648.4365509
num_examples: 250000
download_size: 1902583033
dataset_size: 1955032968.731018
---
# Dataset Card for "quickdraw-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 9,136 | [
[
-0.04034423828125,
-0.0165863037109375,
0.02325439453125,
0.0099945068359375,
-0.031890869140625,
-0.004665374755859375,
0.020416259765625,
-0.0026187896728515625,
0.07684326171875,
0.02423095703125,
-0.0567626953125,
-0.047332763671875,
-0.0400390625,
-0.01... |
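With 345 QuickDraw classes, mapping an integer label back to its name is a frequent need. The `datasets` library exposes this through the `ClassLabel` feature's `int2str`/`str2int` methods, but the mapping can also be sketched directly from the `class_label` names listed in the card above (only a short excerpt of the full 345-entry list is shown here):

```python
# Excerpt of the class_label names from the card above (full list has 345 entries).
label_names = ["aircraft carrier", "airplane", "alarm clock", "ambulance", "angel"]

def int2str(label_id):
    """Map an integer label to its class name."""
    return label_names[label_id]

def str2int(name):
    """Map a class name back to its integer label."""
    return label_names.index(name)
```

The same lookups are available on a loaded split as `dataset.features["label"].int2str(i)` without rebuilding the list by hand.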
Intel/neural-chat-dataset-v1-1 | 2023-09-26T23:03:45.000Z | [
"license:apache-2.0",
"region:us"
] | Intel | null | null | 2 | 46 | 2023-06-30T07:35:44 | ---
license: apache-2.0
---
Here is a collective list of the instruction datasets used for Neural Chat fine-tuning. The total numbers of instruction samples and tokens are about 1.1M and 326M, respectively.
| Type | Language | Dataset | Number |
|--| ---- |--------|----|
| HC3 | en | [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) | 24K |
| dolly | en | [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | 15K |
| alpaca-zh | zh | [tigerbot-alpaca-zh-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-zh-0.5m) | 500K |
| alpaca-en | en | [TigerResearch/tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/tigerbot-alpaca-en-50k) | 50K |
| math | en | [tigerbot-gsm-8k-en](https://huggingface.co/datasets/TigerResearch/tigerbot-gsm-8k-en) | 8K |
| general | en | [tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/tigerbot-stackexchange-qa-en-0.5m) | 500K |
The collective dataset has been validated on multiple LLMs (such as MPT and LLaMA) by the NeuralChat team (Kaokao Lv, Wenxin Zhang, Xuhui Ren, and Haihao Shen) from Intel/SATG/AIA/AIPT. Thanks to [Hello-SimpleAI](https://huggingface.co/Hello-SimpleAI), [databricks](https://huggingface.co/databricks), [TigerResearch/TigerBot](https://github.com/TigerResearch/TigerBot) for releasing the open-source instruction datasets.
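Mixing sources from the table requires mapping each one onto a shared instruction schema first. A minimal sketch, assuming the usual field names for two of the sources (`instruction`/`context`/`response` for dolly, `question`/`human_answers` for HC3) — verify against each dataset's actual columns before use:

```python
def normalize(record, source):
    """Map a source-specific record into {'instruction', 'input', 'output'}."""
    if source == "dolly":  # assumed databricks-dolly-15k field names
        return {"instruction": record["instruction"],
                "input": record.get("context", ""),
                "output": record["response"]}
    if source == "hc3":    # assumed HC3 field names: question plus human answers
        return {"instruction": record["question"],
                "input": "",
                "output": record["human_answers"][0]}
    raise ValueError(f"unknown source: {source}")

dolly_row = {"instruction": "Summarize.", "context": "Long text.", "response": "Short."}
unified = normalize(dolly_row, "dolly")
```

Once every source is normalized this way, the records can be concatenated and shuffled into one instruction-tuning mixture.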
| 1,745 | [
[
-0.037200927734375,
-0.0367431640625,
-0.0011768341064453125,
0.0183868408203125,
-0.00884246826171875,
-0.008880615234375,
-0.0265045166015625,
-0.01617431640625,
0.00815582275390625,
0.0247650146484375,
-0.0413818359375,
-0.05035400390625,
-0.027801513671875,
... |
HachiML/humaneval-ja-v0.6 | 2023-07-24T08:25:21.000Z | [
"task_categories:text2text-generation",
"size_categories:n<1K",
"source_datasets:openai_humaneval",
"language:ja",
"license:mit",
"code",
"code-generation",
"region:us"
] | HachiML | null | null | 0 | 46 | 2023-07-23T13:32:38 | ---
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: prompt_ja
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
splits:
- name: test
num_bytes: 274703
num_examples: 164
download_size: 125629
dataset_size: 274703
license: mit
task_categories:
- text2text-generation
language:
- ja
tags:
- code
- code-generation
size_categories:
- n<1K
pretty_name: HumanEval Japanese
source_datasets:
- openai_humaneval
---
# Dataset Card for "humaneval-ja"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 736 | [
[
-0.044403076171875,
-0.019134521484375,
-0.0014257431030273438,
0.00974273681640625,
-0.0223846435546875,
-0.00531005859375,
0.01666259765625,
-0.0191650390625,
0.06427001953125,
0.0357666015625,
-0.0570068359375,
-0.056396484375,
-0.0333251953125,
-0.004245... |
IfanSnek/bleat | 2023-09-21T13:19:48.000Z | [
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | IfanSnek | null | null | 8 | 46 | 2023-07-28T00:51:56 | ---
license: apache-2.0
task_categories:
- conversational
language:
- en
size_categories:
- n<1K
configs:
- config_name: default
data_files: clean.txt
sample_by: paragraph
---
# Data format:
* Must start with `### System:\n` and a JSON array of available functions. The list must exist but may be empty.
* A `### User:` header must come next with text from the user.
* If the assistant makes a call, a `### Call:` header must come next along with a JSON object with `"name"` and `"parameters"` entries. The `"parameters"` entry is a JSON object of `"parameter": "value"` pairs.
* If a call is made, a `### Return` header must come next with a JSON object of the form `{"result": value}`.
* The assistant will make a response with `### Assistant: Response`.
* Separate conversations must be separated with two newlines.
# Example:
```
### System:
[
{
"name": "calculate_age",
"description": "Calculates the age of a person.",
"parameters": [
{
"name": "birth_date",
"type": "date",
"description": "The date of birth of the person."
},
{
"name": "current_date",
"type": "date",
"description": "The current date."
}
],
"required": ["birth_date", "current_date"]
}
]
### User: How old am I?
### Assistant: When were you born?
### User: 2000-01-01
### Call:
{
"name": "calculate_age",
"parameters": {"birth_date": "2000-01-01","current_date": "2020-12-30"}
}
### Return:
{"result": 20}
### Assistant: You are 20 years old.
### User: How old is my brother?
### Assistant: When was your brother born?
### User: 2002-01-01
### Call:
{
"name": "calculate_age",
"parameters": {"birth_date": "2002-01-01","current_date": "2020-12-30"}
}
### Return:
{"result": 18}
### Assistant: Your brother is 18 years old, 2 years younger than you.
``` | 1,933 | [
[
-0.0092926025390625,
-0.05401611328125,
0.0196990966796875,
0.038177490234375,
0.003047943115234375,
0.00007128715515136719,
0.02044677734375,
-0.006641387939453125,
0.0271453857421875,
0.045379638671875,
-0.0440673828125,
-0.044586181640625,
-0.0309295654296875... |
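Because the format above is plain text, a conversation can be parsed by scanning for its `### ` headers and JSON-decoding the System, Call, and Return bodies. A minimal sketch (the helper names are our own, not part of the dataset):

```python
import json
import re

# Matches the section headers defined by the format above.
HEADER = re.compile(r"^### (System|User|Assistant|Call|Return):?", re.MULTILINE)

def parse_conversation(text):
    """Split one conversation into (role, content) turns; JSON bodies are decoded."""
    turns = []
    matches = list(HEADER.finditer(text))
    for m, nxt in zip(matches, matches[1:] + [None]):
        role = m.group(1)
        end = nxt.start() if nxt else len(text)
        body = text[m.end():end].strip()
        if role in {"System", "Call", "Return"}:  # these carry JSON per the spec
            body = json.loads(body)
        turns.append((role, body))
    return turns

convo = """### System:
[]
### User: How old am I?
### Call:
{"name": "calculate_age", "parameters": {"birth_date": "2000-01-01"}}
### Return:
{"result": 20}
### Assistant: You are 20 years old."""
turns = parse_conversation(convo)
```

Separate conversations (delimited by two newlines per the spec) can first be split with `text.split("\n\n")` and each piece fed through `parse_conversation`.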
frtna/FineTuneTaxonomy | 2023-08-04T23:46:31.000Z | [
"region:us"
] | frtna | null | null | 0 | 46 | 2023-08-04T23:45:21 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01494598388671875,
0.057159423828125,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005077362060546875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.01494598388671875,
-0.06036376953125,
0.03... |
benjis/bigvul | 2023-08-31T03:02:50.000Z | [
"region:us"
] | benjis | null | null | 0 | 46 | 2023-08-31T02:55:32 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: CVE ID
dtype: string
- name: CVE Page
dtype: string
- name: CWE ID
dtype: string
- name: codeLink
dtype: string
- name: commit_id
dtype: string
- name: commit_message
dtype: string
- name: func_after
dtype: string
- name: func_before
dtype: string
- name: lang
dtype: string
- name: project
dtype: string
- name: vul
dtype: int8
splits:
- name: train
num_bytes: 404950685.2579571
num_examples: 150908
- name: validation
num_bytes: 88684597.21877055
num_examples: 33049
- name: test
num_bytes: 88687280.64632414
num_examples: 33050
download_size: 252969708
dataset_size: 582322563.1230518
---
# Dataset Card for "bigvul"
Unofficial, not affiliated with the authors.
- **Paper:** https://doi.org/10.1145/3379597.3387501
- **Repository:** https://github.com/ZeoVan/MSR_20_Code_vulnerability_CSV_Dataset | 1,115 | [
[
0.0084381103515625,
-0.0300140380859375,
0.00698089599609375,
0.0133056640625,
-0.03167724609375,
-0.0014247894287109375,
-0.0160064697265625,
-0.016845703125,
0.018524169921875,
0.0207366943359375,
-0.0245513916015625,
-0.072998046875,
-0.047149658203125,
-... |
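Each Big-Vul record above carries `func_before`, `func_after`, and a `vul` flag, so a common preprocessing step is to expand one record into function-level classification examples. A hedged sketch under the usual reading that `func_before` of a vulnerable record is the flawed code and `func_after` is the patched version — confirm this interpretation against the paper before relying on it:

```python
def to_examples(record):
    """Expand one Big-Vul record into (code, label) pairs; label 1 = vulnerable."""
    if record["vul"] == 1:
        # the pre-fix function is the positive example, the patched one a negative
        return [(record["func_before"], 1), (record["func_after"], 0)]
    # non-vulnerable records contribute a single clean function
    return [(record["func_before"], 0)]

record = {"vul": 1,
          "func_before": "void f() { strcpy(buf, src); }",
          "func_after": "void f() { strncpy(buf, src, sizeof(buf)); }"}
examples = to_examples(record)
```

Mapped over the train/validation/test splits, this yields a labeled corpus suitable for vulnerability-detection classifiers.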
ashwincv0112/SAS_Python_Conversion | 2023-09-08T08:23:29.000Z | [
"region:us"
] | ashwincv0112 | null | null | 0 | 46 | 2023-09-08T08:23:28 | ---
dataset_info:
features:
- name: SAS Code
dtype: string
- name: Converted Python Code
dtype: string
splits:
- name: train
num_bytes: 6362
num_examples: 30
download_size: 5247
dataset_size: 6362
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SAS_Python_Conversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 492 | [
[
-0.033050537109375,
-0.0012159347534179688,
0.0159149169921875,
0.037017822265625,
-0.010284423828125,
-0.015594482421875,
0.005863189697265625,
0.00569915771484375,
0.0294952392578125,
0.0255584716796875,
-0.0618896484375,
-0.040924072265625,
-0.03790283203125,... |
Aharneish/spirit-qa | 2023-10-03T07:23:52.000Z | [
"region:us"
] | Aharneish | null | null | 0 | 46 | 2023-09-26T04:53:09 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
tiwes/wzd3k | 2023-09-28T12:48:02.000Z | [
"region:us"
] | tiwes | null | null | 0 | 46 | 2023-09-28T12:36:09 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
pouya-haghi/pile-1k | 2023-10-04T17:55:34.000Z | [
"region:us"
] | pouya-haghi | null | null | 0 | 46 | 2023-10-04T17:53:01 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: pile_set_name
dtype: string
splits:
- name: train
num_bytes: 5342511
num_examples: 1000
download_size: 2874173
dataset_size: 5342511
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pile-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 508 | [
[
-0.053497314453125,
-0.013916015625,
-0.0055999755859375,
0.02276611328125,
-0.033477783203125,
-0.004230499267578125,
0.04736328125,
-0.006710052490234375,
0.07073974609375,
0.050567626953125,
-0.055633544921875,
-0.049591064453125,
-0.049041748046875,
-0.0... |
lecslab/glosslm | 2023-10-23T00:17:11.000Z | [
"region:us"
] | lecslab | null | null | 0 | 46 | 2023-10-04T23:17:46 | ---
dataset_info:
features:
- name: ID
dtype: string
- name: glottocode
dtype: string
- name: transcription
dtype: string
- name: glosses
dtype: string
- name: translation
dtype: string
- name: metalang_glottocode
dtype: string
- name: is_segmented
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 92230126
num_examples: 451407
download_size: 31766111
dataset_size: 92230126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "glosslm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 719 | [
[
-0.04339599609375,
-0.02716064453125,
0.0168914794921875,
0.006336212158203125,
-0.00839996337890625,
0.007480621337890625,
0.0153961181640625,
-0.0154266357421875,
0.059783935546875,
0.03863525390625,
-0.05548095703125,
-0.059539794921875,
-0.039093017578125,
... |
zkdeng/dangerousSpiders | 2023-10-05T00:49:18.000Z | [
"region:us"
] | zkdeng | null | null | 0 | 46 | 2023-10-05T00:28:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Acantholycosa_lignaria
'1': Aglaoctenus_castaneus
'2': Aglaoctenus_lagotis
'3': Allocosa_funerea
'4': Allotrochosina_schauinslandi
'5': Alopecosa_albofasciata
'6': Alopecosa_barbipes
'7': Alopecosa_cuneata
'8': Alopecosa_inquilina
'9': Alopecosa_kochi
'10': Alopecosa_pulverulenta
'11': Anahita_punctulata
'12': Ancylometes_bogotensis
'13': Ancylometes_concolor
'14': Ancylometes_rufus
'15': Anoteropsis_hilaris
'16': Anoteropsis_litoralis
'17': Araneus_diadematus
'18': Arctosa_cinerea
'19': Arctosa_leopardus
'20': Arctosa_littoralis
'21': Arctosa_perita
'22': Arctosa_personata
'23': Asthenoctenus_borellii
'24': Aulonia_albimana
'25': Centroctenus_brevipes
'26': Cheiracanthium_erraticum
'27': Cheiracanthium_gracile
'28': Cheiracanthium_inclusum
'29': Cheiracanthium_mildei
'30': Cheiracanthium_punctorium
'31': Ctenus_amphora
'32': Ctenus_hibernalis
'33': Ctenus_medius
'34': Ctenus_ornatus
'35': Cupiennius_coccineus
'36': Cupiennius_getazi
'37': Cupiennius_salei
'38': Diapontia_uruguayensis
'39': Eratigena_agrestis
'40': Geolycosa_vultuosa
'41': Gladicosa_gulosa
'42': Gladicosa_pulchra
'43': Hippasa_holmerae
'44': Hogna_antelucana
'45': Hogna_baltimoriana
'46': Hogna_bivittata
'47': Hogna_carolinensis
'48': Hogna_crispipes
'49': Hogna_frondicola
'50': Hogna_gumia
'51': Hogna_radiata
'52': Lampona_cylindrata
'53': Latrodectus_bishopi
'54': Latrodectus_curacaviensis
'55': Latrodectus_geometricus
'56': Latrodectus_hasselti
'57': Latrodectus_hesperus
'58': Latrodectus_katipo
'59': Latrodectus_mactans
'60': Latrodectus_mirabilis
'61': Latrodectus_renivulvatus
'62': Latrodectus_tredecimguttatus
'63': Latrodectus_variolus
'64': Loxosceles_amazonica
'65': Loxosceles_deserta
'66': Loxosceles_laeta
'67': Loxosceles_reclusa
'68': Loxosceles_rufescens
'69': Loxosceles_tenochtitlan
'70': Loxosceles_yucatana
'71': Lycosa_erythrognatha
'72': Lycosa_hispanica
'73': Lycosa_pampeana
'74': Lycosa_praegrandis
'75': Lycosa_singoriensis
'76': Lycosa_tarantula
'77': Missulena_bradleyi
'78': Missulena_occatoria
'79': Paratrochosina_amica
'80': Pardosa_amentata
'81': Pardosa_lapidicina
'82': Pardosa_mercurialis
'83': Pardosa_moesta
'84': Pardosa_wagleri
'85': Phoneutria_boliviensis
'86': Phoneutria_depilata
'87': Phoneutria_fera
'88': Phoneutria_nigriventer
'89': Phoneutria_pertyi
'90': Phoneutria_reidyi
'91': Pirata_piraticus
'92': Portacosa_cinerea
'93': Rabidosa_hentzi
'94': Rabidosa_punctulata
'95': Rabidosa_rabida
'96': Schizocosa_avida
'97': Schizocosa_malitiosa
'98': Schizocosa_mccooki
'99': Sicarius_thomisoides
'100': Sosippus_californicus
'101': Tigrosa_annexa
'102': Tigrosa_aspersa
'103': Tigrosa_georgicola
'104': Tigrosa_helluo
'105': Trochosa_ruricola
'106': Trochosa_sepulchralis
'107': Trochosa_terricola
'108': Tropicosa_moesta
'109': Venator_immansuetus
'110': Venator_spenceri
'111': Venatrix_furcillata
'112': Wadicosa_fidelis
'113': Xerolycosa_miniata
'114': Xerolycosa_nemoralis
splits:
- name: train
num_bytes: 4290587998.03
num_examples: 166895
download_size: 3551438155
dataset_size: 4290587998.03
---
# Dataset Card for "dangerousSpiders"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 4,593 | [
[
-0.0254974365234375,
-0.02349853515625,
0.004909515380859375,
0.00958251953125,
-0.0164947509765625,
0.0109100341796875,
0.0269012451171875,
-0.028961181640625,
0.05303955078125,
0.04864501953125,
-0.05096435546875,
-0.056732177734375,
-0.055877685546875,
-0... |
HuggingSara/medqa | 2023-10-05T14:12:30.000Z | [
"region:us"
] | HuggingSara | null | null | 0 | 46 | 2023-10-05T14:10:17 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: meta_info
dtype: string
- name: answer_idx
dtype: string
splits:
- name: train
num_bytes: 9470204
num_examples: 10178
- name: validation
num_bytes: 1184039
num_examples: 1272
- name: test
num_bytes: 1211382
num_examples: 1273
download_size: 6952745
dataset_size: 11865625
---
# Dataset Card for "Med_QA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 968 | [
[
-0.029541015625,
-0.01207733154296875,
0.03741455078125,
0.0015010833740234375,
-0.0172882080078125,
0.0009589195251464844,
0.0400390625,
-0.012054443359375,
0.064697265625,
0.032440185546875,
-0.0577392578125,
-0.05584716796875,
-0.031158447265625,
-0.01269... |
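The MedQA fields documented above (`question`, `options` keyed A–E, `answer_idx`) lend themselves to a standard multiple-choice prompt. A hedged sketch of one possible formatting — the prompt layout is our choice, not prescribed by the dataset:

```python
def format_mcq(item):
    """Render a MedQA item as a multiple-choice prompt plus its gold letter."""
    lines = [item["question"]]
    for letter in ["A", "B", "C", "D", "E"]:
        if item["options"].get(letter):  # option E may be absent in some items
            lines.append(f"{letter}. {item['options'][letter]}")
    lines.append("Answer:")
    return "\n".join(lines), item["answer_idx"]

# Illustrative item mirroring the documented schema.
item = {"question": "Which vitamin deficiency causes scurvy?",
        "options": {"A": "Vitamin A", "B": "Vitamin B12", "C": "Vitamin C",
                    "D": "Vitamin D", "E": "Vitamin K"},
        "answer_idx": "C"}
prompt, gold = format_mcq(item)
```

The returned pair can feed either few-shot prompting (append the gold letter for in-context examples) or supervised fine-tuning.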
ContextualAI/tiny-hellaswag | 2023-10-09T21:43:49.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 46 | 2023-10-09T21:43:46 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold_generation
dtype: string
splits:
- name: dev
num_bytes: 46204
num_examples: 100
download_size: 30744
dataset_size: 46204
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
---
# Dataset Card for "tiny-hellaswag"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.045623779296875,
-0.0240325927734375,
0.02099609375,
0.00519561767578125,
-0.03033447265625,
-0.0155792236328125,
0.0080413818359375,
-0.00484466552734375,
0.06793212890625,
0.0279388427734375,
-0.06781005859375,
-0.052734375,
-0.041961669921875,
-0.01824... |
FinGPT/fingpt-finred | 2023-10-10T06:58:37.000Z | [
"region:us"
] | FinGPT | null | null | 1 | 46 | 2023-10-10T06:56:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 25113554
num_examples: 27558
- name: test
num_bytes: 4477146
num_examples: 5112
download_size: 2114835
dataset_size: 29590700
---
# Dataset Card for "fingpt-finred"
This dataset consists of both a Relation Extraction part and a Classification part, and is used in multi-task instruction tuning.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 745 | [
[
-0.04638671875,
-0.050933837890625,
0.0007634162902832031,
0.00771331787109375,
-0.024169921875,
-0.00830078125,
-0.011138916015625,
-0.02227783203125,
0.01372528076171875,
0.046966552734375,
-0.0689697265625,
-0.03326416015625,
-0.03424072265625,
-0.0225219... |
tahrirchi/uz-crawl | 2023-10-28T19:10:00.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:uz",
"license:apache-2.0",
"uz",
"crawl",
"telegram_blog... | tahrirchi | null | null | 4 | 46 | 2023-10-26T10:43:01 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: news
path: data/news-*
- split: telegram_blogs
path: data/telegram_blogs-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: source
dtype: string
splits:
- name: news
num_bytes: 3272404822
num_examples: 964268
- name: telegram_blogs
num_bytes: 248666870
num_examples: 227337
download_size: 1581389108
dataset_size: 3521071692
annotations_creators:
- no-annotation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- monolingual
language:
- uz
size_categories:
- 1M<n<10M
pretty_name: UzCrawl
tags:
- uz
- crawl
- telegram_blogs
---
# Dataset Card for UzCrawl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://tahrirchi.uz/grammatika-tekshiruvi](https://tahrirchi.uz/grammatika-tekshiruvi)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.58 GB
- **Size of the generated dataset:** 3.52 GB
- **Total amount of disk used:** 5.1 GB
### Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzCrawl dataset, a web and Telegram crawl corpus consisting of materials from nearly 1.2 million unique sources in the Uzbek language.
Please refer to our [blogpost](https://tahrirchi.uz/grammatika-tekshiruvi) and paper (Coming soon!) for further details.
To load and use the dataset, run this script:
```python
from datasets import load_dataset
uz_crawl = load_dataset("tahrirchi/uz-crawl")
```
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 1.58 GB
- **Size of the generated dataset:** 3.52 GB
- **Total amount of disk used:** 5.1 GB
An example of 'news' looks as follows.
```
{
'text': "O‘zbekiston Respublikasi Vazirlar Mahkamasining 2019 yil 24 iyuldagi 620-son qarori bilan tasdiqlangan «Xorijiy davlatlarda ta'lim olganlik to‘g‘risidagi hujjatlarni tan olish tartibi to‘g‘risida»gi Nizom ijrosini ta'minlash maqsadida Ta'lim sifatini nazorat qilish davlat inspeksiyasida (Toshkent shahar, Chilonzor tumani, Nurxon ko‘chasi, 21-uy) 2019 yil 9 –14 sentabr kunlari sohalar bo‘yicha sinov testlari bo‘lib o‘tishi rejalashtirilgan.\nTa'lim sifatini nazorat qilish davlat inspeksiyasi matbuot xizmati xabariga\xa0ko‘ra, «Huquqshunoslik», «Sog‘liqni saqlash va ijtimoiy ta'minot», «Iqtisodiyot», «Qishloq xo‘jaligi, muhandislik, ishlov berish va qurilish» hamda «O‘qituvchilar tayyorlash va pedagogik fanlar» sohalari bo‘yicha sinov testlari o‘tkaziladigan sanasi va sinov testida ishtirok etuvchilar ro‘yxati jadvalga muvofiq belgilanadi.\nTa'lim sifatini nazorat qilish davlat inspeksiyasi ogohlantirishicha, xorijiy davlatlarda ta'lim olganlik to‘g‘risidagi hujjatlarni tan olish uchun belgilangan sinov testlariga o‘z vaqtida kelmagan, sinov testida ishtirok etuvchilar ro‘yxatida mavjud bo‘lmagan talabgorlarga sinovlarga kirishga ruxsat etilmaydi.",
'timestamp': '2019-06-09',
'source': 'https://kun.uz/uz/news/2019/09/06/xorijda-talim-olganlik-togrisidagi-hujjatlarni-tan-olish-uchun-testlar-otkaziladigan-kunlar-malum-boldi'
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature containing the text of the material.
- `timestamp`: a `string` feature containing the timestamp of the material.
- `source`: a `string` feature containing the URL of the material.
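As an illustrative sketch of working with these fields, rows can be grouped by the domain of `source` (the rows below mimic the schema above with abbreviated text; the `t.me` URL is an assumed example for the telegram_blogs split):

```python
from urllib.parse import urlparse

# Minimal rows mimicking this card's schema (text abbreviated; the
# t.me URL is a made-up example for the telegram_blogs split).
rows = [
    {"text": "...", "timestamp": "2019-06-09",
     "source": "https://kun.uz/uz/news/2019/09/06/xorijda-talim"},
    {"text": "...", "timestamp": "2020-01-15",
     "source": "https://t.me/example_channel/42"},
]

# Group rows by source domain -- a common first step when filtering
# or balancing a crawl corpus.
by_domain = {}
for row in rows:
    by_domain.setdefault(urlparse(row["source"]).netloc, []).append(row)

print(sorted(by_domain))  # ['kun.uz', 't.me']
```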
### Data Splits
| name | examples |
|-----------------|--------:|
| news | 964268 |
| telegram_blogs | 227337 |
## Dataset Creation
The news portion has been crawled from 21 different websites using the [Scrapy](https://scrapy.org/) framework, and the telegram_blogs portion consists of manually curated texts from 81 high-quality Telegram channels.
## Citation
Please cite this dataset using the following format:
```
@online{Mamasaidov2023UzCrawl,
author = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
title = {UzCrawl dataset},
year = {2023},
url = {https://huggingface.co/datasets/tahrirchi/uz-crawl},
note = {Accessed: 2023-10-28}, % change this date
urldate = {2023-10-28} % change this date
}
```
## Gratitude
We are thankful to these awesome organizations and people for helping to make it happen:
- [Asadbek Kiyomov](https://www.linkedin.com/in/asadbey): for his work at the beginning of the project.
- [Ilya Gusev](https://github.com/IlyaGusev/): for his advice throughout the process
- [David Dale](https://daviddale.ru): for his advice throughout the process
## Contacts
We believe that this work will inspire all enthusiasts around the world to open the hidden beauty of low-resource languages, in particular of Uzbek.
For further development and issues about the dataset, please contact m.mamasaidov@tahrirchi.uz or a.shopolatov@tahrirchi.uz.
[
-0.0197601318359375,
-0.02581787109375,
0.00982666015625,
0.01364898681640625,
-0.043304443359375,
0.013916015625,
-0.0226593017578125,
-0.0255126953125,
0.040069580078125,
0.02105712890625,
-0.05084228515625,
-0.06475830078125,
-0.0196380615234375,
0.002166... |
kpriyanshu256/semeval-task-8-a-mono | 2023-11-01T15:10:39.000Z | [
"region:us"
] | kpriyanshu256 | null | null | 0 | 46 | 2023-11-01T15:10:28 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: model
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 237254533
num_examples: 83829
- name: val
num_bytes: 101985332
num_examples: 35928
- name: test
num_bytes: 10543757
num_examples: 5000
download_size: 201649583
dataset_size: 349783622
---
# Dataset Card for "semeval-task-8-a-mono"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 787 | [
[
-0.0374755859375,
-0.0181884765625,
0.01241302490234375,
0.0178070068359375,
-0.0289154052734375,
-0.003612518310546875,
0.0196685791015625,
-0.009063720703125,
0.07861328125,
0.034698486328125,
-0.058807373046875,
-0.0426025390625,
-0.052459716796875,
-0.01... |
ScalableMath/rm_data2 | 2023-11-02T10:09:09.000Z | [
"region:us"
] | ScalableMath | null | null | 0 | 46 | 2023-11-02T10:08:55 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
norne | 2023-01-25T14:41:42.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:no",
"license:other",
"arxiv:1911.12146",
"re... | null | NorNE is a manually annotated
corpus of named entities which extends the annotation of the existing
Norwegian Dependency Treebank. Comprising both of the official standards of
written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000
tokens and annotates a rich set of entity types including persons,
organizations, locations, geo-political entities, products, and events,
in addition to a class corresponding to nominals derived from names. | @inproceedings{johansen2019ner,
title={NorNE: Annotating Named Entities for Norwegian},
author={Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg,
Lilja Øvrelid, and Erik Velldal},
booktitle={LREC 2020},
year={2020},
url={https://arxiv.org/abs/1911.12146}
} | 1 | 45 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- 'no'
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: 'NorNE: Norwegian Named Entities'
dataset_info:
- config_name: bokmaal
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-GPE_LOC
'6': I-GPE_LOC
'7': B-PROD
'8': I-PROD
'9': B-LOC
'10': I-LOC
'11': B-GPE_ORG
'12': I-GPE_ORG
'13': B-DRV
'14': I-DRV
'15': B-EVT
'16': I-EVT
'17': B-MISC
'18': I-MISC
splits:
- name: train
num_bytes: 10032169
num_examples: 15696
- name: validation
num_bytes: 1501730
num_examples: 2410
- name: test
num_bytes: 1234272
num_examples: 1939
download_size: 20909241
dataset_size: 12768171
- config_name: nynorsk
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-GPE_LOC
'6': I-GPE_LOC
'7': B-PROD
'8': I-PROD
'9': B-LOC
'10': I-LOC
'11': B-GPE_ORG
'12': I-GPE_ORG
'13': B-DRV
'14': I-DRV
'15': B-EVT
'16': I-EVT
'17': B-MISC
'18': I-MISC
splits:
- name: train
num_bytes: 10072260
num_examples: 14174
- name: validation
num_bytes: 1278029
num_examples: 1890
- name: test
num_bytes: 1023358
num_examples: 1511
download_size: 20209253
dataset_size: 12373647
- config_name: combined
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-GPE_LOC
'6': I-GPE_LOC
'7': B-PROD
'8': I-PROD
'9': B-LOC
'10': I-LOC
'11': B-GPE_ORG
'12': I-GPE_ORG
'13': B-DRV
'14': I-DRV
'15': B-EVT
'16': I-EVT
'17': B-MISC
'18': I-MISC
splits:
- name: train
num_bytes: 20104393
num_examples: 29870
- name: validation
num_bytes: 2779723
num_examples: 4300
- name: test
num_bytes: 2257594
num_examples: 3450
download_size: 41118494
dataset_size: 25141710
- config_name: bokmaal-7
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-DRV
'10': I-DRV
'11': B-EVT
'12': I-EVT
'13': B-MISC
'14': I-MISC
splits:
- name: train
num_bytes: 10032169
num_examples: 15696
- name: validation
num_bytes: 1501730
num_examples: 2410
- name: test
num_bytes: 1234272
num_examples: 1939
download_size: 20909241
dataset_size: 12768171
- config_name: nynorsk-7
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-DRV
'10': I-DRV
'11': B-EVT
'12': I-EVT
'13': B-MISC
'14': I-MISC
splits:
- name: train
num_bytes: 10072260
num_examples: 14174
- name: validation
num_bytes: 1278029
num_examples: 1890
- name: test
num_bytes: 1023358
num_examples: 1511
download_size: 20209253
dataset_size: 12373647
- config_name: combined-7
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-DRV
'10': I-DRV
'11': B-EVT
'12': I-EVT
'13': B-MISC
'14': I-MISC
splits:
- name: train
num_bytes: 20104393
num_examples: 29870
- name: validation
num_bytes: 2779723
num_examples: 4300
- name: test
num_bytes: 2257594
num_examples: 3450
download_size: 41118494
dataset_size: 25141710
- config_name: bokmaal-8
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-GPE
'10': I-GPE
'11': B-DRV
'12': I-DRV
'13': B-EVT
'14': I-EVT
'15': B-MISC
'16': I-MISC
splits:
- name: train
num_bytes: 10032169
num_examples: 15696
- name: validation
num_bytes: 1501730
num_examples: 2410
- name: test
num_bytes: 1234272
num_examples: 1939
download_size: 20909241
dataset_size: 12768171
- config_name: nynorsk-8
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-GPE
'10': I-GPE
'11': B-DRV
'12': I-DRV
'13': B-EVT
'14': I-EVT
'15': B-MISC
'16': I-MISC
splits:
- name: train
num_bytes: 10072260
num_examples: 14174
- name: validation
num_bytes: 1278029
num_examples: 1890
- name: test
num_bytes: 1023358
num_examples: 1511
download_size: 20209253
dataset_size: 12373647
- config_name: combined-8
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-GPE
'10': I-GPE
'11': B-DRV
'12': I-DRV
'13': B-EVT
'14': I-EVT
'15': B-MISC
'16': I-MISC
splits:
- name: train
num_bytes: 20104393
num_examples: 29870
- name: validation
num_bytes: 2779723
num_examples: 4300
- name: test
num_bytes: 2257594
num_examples: 3450
download_size: 41118494
dataset_size: 25141710
---
# Dataset Card for NorNE: Norwegian Named Entities
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NorNE](https://github.com/ltgoslo/norne/)
- **Repository:** [Github](https://github.com/ltgoslo/norne/)
- **Paper:** https://arxiv.org/abs/1911.12146
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
There are 3 main configs in this dataset each with 3 versions of the NER tag set. When accessing the `bokmaal`, `nynorsk`, or `combined` configs the NER tag set will be comprised of 9 tags: `GPE_ORG`, `GPE_LOC`, `ORG`, `LOC`, `PER`, `PROD`, `EVT`, `DRV`, and `MISC`. The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. To access these reduced versions of the dataset, you can use the configs `bokmaal-7`, `nynorsk-7`, `combined-7` for the NER tag set with 7 tags ( **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, `combined-8` for the NER tag set with 8 tags (`LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full set (9 tags) will be used. See Annotations for further details.
### Supported Tasks and Leaderboards
NorNE adds named entity annotations on top of the Norwegian Dependency Treebank.
### Languages
Both Norwegian Bokmål (`bokmaal`) and Nynorsk (`nynorsk`) are supported as different configs in this dataset. An extra config for the combined languages is also included (`combined`). See the Annotation section for details on accessing reduced tag sets for the NER feature.
## Dataset Structure
Each entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.
### Data Instances
An example of the `train` split of the `bokmaal` config.
```python
{'idx': '000001',
'lang': 'bokmaal',
'lemmas': ['lam', 'og', 'piggvar', 'på', 'bryllupsmeny'],
'ner_tags': [0, 0, 0, 0, 0],
'pos_tags': [0, 9, 0, 5, 0],
'text': 'Lam og piggvar på bryllupsmenyen',
'tokens': ['Lam', 'og', 'piggvar', 'på', 'bryllupsmenyen']}
```
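The integer `ner_tags`/`pos_tags` decode against the label lists given in the config above. A small sketch with the lists copied by hand (in practice the same mapping is available from the loaded dataset's features, so prefer that over hard-coding):

```python
# NER and POS label lists as listed in this card's `bokmaal` config.
NER = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-GPE_LOC", "I-GPE_LOC",
       "B-PROD", "I-PROD", "B-LOC", "I-LOC", "B-GPE_ORG", "I-GPE_ORG",
       "B-DRV", "I-DRV", "B-EVT", "I-EVT", "B-MISC", "I-MISC"]
POS = ["NOUN", "PUNCT", "ADP", "NUM", "SYM", "SCONJ", "ADJ", "PART", "DET",
       "CCONJ", "PROPN", "PRON", "X", "ADV", "INTJ", "VERB", "AUX"]

# The example instance shown above.
example = {
    "tokens": ["Lam", "og", "piggvar", "på", "bryllupsmenyen"],
    "ner_tags": [0, 0, 0, 0, 0],
    "pos_tags": [0, 9, 0, 5, 0],
}

# Pair each token with its decoded NER and POS labels.
decoded = [(tok, NER[n], POS[p]) for tok, n, p in
           zip(example["tokens"], example["ner_tags"], example["pos_tags"])]
print(decoded[1])  # ('og', 'O', 'CCONJ')
```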
### Data Fields
Each entry is annotated with the next fields:
- `idx` (`str`), text (sentence) identifier from the NorNE dataset
- `lang` (`str`), language variety, either `bokmaal`, `nynorsk` or `combined`
- `text` (`str`), plain text
- `tokens` (`List[str]`), list of tokens extracted from `text`
- `lemmas` (`List[str]`), list of lemmas extracted from `tokens`
- `ner_tags` (`List[int]`), list of numeric NER tags for each token in `tokens`
- `pos_tags` (`List[int]`), list of numeric PoS tags for each token in `tokens`
An example DataFrame obtained from the dataset:
<table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>idx</th>
<th>lang</th>
<th>text</th>
<th>tokens</th>
<th>lemmas</th>
<th>ner_tags</th>
<th>pos_tags</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>000001</td>
<td>bokmaal</td>
<td>Lam og piggvar på bryllupsmenyen</td>
<td>[Lam, og, piggvar, på, bryllupsmenyen]</td>
<td>[lam, og, piggvar, på, bryllupsmeny]</td>
<td>[0, 0, 0, 0, 0]</td>
<td>[0, 9, 0, 5, 0]</td>
</tr>
<tr>
<th>1</th>
<td>000002</td>
<td>bokmaal</td>
<td>Kamskjell, piggvar og lammefilet sto på menyen...</td>
<td>[Kamskjell, ,, piggvar, og, lammefilet, sto, p...</td>
<td>[kamskjell, $,, piggvar, og, lammefilet, stå, ...</td>
<td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]</td>
<td>[0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1]</td>
</tr>
<tr>
<th>2</th>
<td>000003</td>
<td>bokmaal</td>
<td>Og til dessert: Parfait à la Mette-Marit.</td>
<td>[Og, til, dessert, :, Parfait, à, la, Mette-Ma...</td>
<td>[og, til, dessert, $:, Parfait, à, la, Mette-M...</td>
<td>[0, 0, 0, 0, 7, 8, 8, 8, 0]</td>
<td>[9, 2, 0, 1, 10, 12, 12, 10, 1]</td>
</tr>
</tbody>
</table>
### Data Splits
There are three splits: `train`, `validation` and `test`.
| Config | Split | Total |
| :---------|-------------:|-------:|
| `bokmaal` | `train` | 15696 |
| `bokmaal` | `validation` | 2410 |
| `bokmaal` | `test` | 1939 |
| `nynorsk` | `train` | 14174 |
| `nynorsk` | `validation` | 1890 |
| `nynorsk` | `test` | 1511 |
| `combined`| `train`      | 29870 |
| `combined`| `validation` | 4300 |
| `combined`| `test` | 3450 |
## Dataset Creation
### Curation Rationale
1. A _name_ in this context is close to [Saul Kripke's definition of a name](https://en.wikipedia.org/wiki/Saul_Kripke#Naming_and_Necessity),
in that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. "Regjeringen" (en. "Government")).
2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,
3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen
(following [Markert and Nissim, 2002](http://www.lrec-conf.org/proceedings/lrec2002/pdf/11.pdf)).
For more details, see the "Annotation Guidelines.pdf" distributed with the corpus.
### Source Data
Data was collected using blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.
#### Initial Data Collection and Normalization
The texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions
and hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.
The treebank consists of two parts, one part in Norwegian Bokmål (`nob`) and one part in Norwegian Nynorsk (`nno`).
Both parts contain around 300,000 tokens, and are a mix of different non-fictional genres.
See the [NDT webpage](https://www.nb.no/sprakbanken/show?serial=sbr-10) for more details.
### Annotations
The following types of entities are annotated:
- **Person (`PER`):** Real or fictional characters and animals
- **Organization (`ORG`):** Any collection of people, such as firms, institutions, organizations, music groups,
sports teams, unions, political parties etc.
- **Location (`LOC`):** Geographical places, buildings and facilities
- **Geo-political entity (`GPE`):** Geographical regions defined by political and/or social groups.
A GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people
- **Product (`PROD`):** Artificially produced entities are regarded products. This may include more abstract entities, such as speeches,
radio shows, programming languages, contracts, laws and ideas.
- **Event (`EVT`):** Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.
- **Derived (`DRV`):** Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are "Brann-treneren" ("the Brann coach") or "Oslo-mannen" ("the man from Oslo").
- **Miscellaneous (`MISC`):** Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Products, whereas things naturally or spontaneously occurring are of type Miscellaneous.
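The `B-`/`I-` prefixes in the tag set follow the usual BIO convention, so entity spans can be recovered by grouping label runs. A minimal sketch, using the `PROD` entity from the Data Instances section:

```python
def bio_to_spans(tokens, labels):
    """Group BIO-style string labels into (entity_type, text) spans."""
    spans, start, etype = [], None, None
    for i, lab in enumerate(list(labels) + ["O"]):  # sentinel flushes the last span
        if lab.startswith("I-") and etype == lab[2:]:
            continue                                 # current span continues
        if etype is not None:                        # close any open span
            spans.append((etype, " ".join(tokens[start:i])))
            etype = None
        if lab.startswith("B-"):
            start, etype = i, lab[2:]
    return spans

toks = ["Og", "til", "dessert", ":", "Parfait", "à", "la", "Mette-Marit", "."]
labs = ["O", "O", "O", "O", "B-PROD", "I-PROD", "I-PROD", "I-PROD", "O"]
print(bio_to_spans(toks, labs))  # [('PROD', 'Parfait à la Mette-Marit')]
```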
Furthermore, all `GPE` entities are additionally sub-categorized as being either `ORG` or `LOC`, with the two annotation levels separated by an underscore:
- `GPE_LOC`: Geo-political entity, with a locative sense (e.g. "John lives in _Spain_")
- `GPE_ORG`: Geo-political entity, with an organisation sense (e.g. "_Spain_ declined to meet with Belgium")
The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:
- 7 types, deleting `_GPE`: **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 8 types, deleting `LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 9 types, keeping all types: **`ORG`**, **`LOC`**, **`GPE_LOC`**, **`GPE_ORG`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
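When working with string labels directly rather than the prebuilt `-7`/`-8` configs, the 9-type scheme can be conflated to either reduced set with a simple string mapping; a sketch:

```python
def to_7(label: str) -> str:
    """Conflate GPE_LOC -> LOC and GPE_ORG -> ORG (7-type scheme)."""
    return label.replace("GPE_LOC", "LOC").replace("GPE_ORG", "ORG")

def to_8(label: str) -> str:
    """Conflate GPE_LOC and GPE_ORG into a single GPE type (8-type scheme)."""
    return label.replace("GPE_LOC", "GPE").replace("GPE_ORG", "GPE")

print(to_7("B-GPE_LOC"), to_8("I-GPE_ORG"))  # B-LOC I-GPE
```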
The class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotations):
| Type | Train | Dev | Test | Total |
| :--------|-------:|-------:|-------:|-------:|
| `PER` | 4033 | 607 | 560 | 5200 |
| `ORG` | 2828 | 400 | 283 | 3511 |
| `GPE_LOC`| 2132 | 258 | 257 | 2647 |
| `PROD` | 671 | 162 | 71 | 904 |
| `LOC` | 613 | 109 | 103 | 825 |
| `GPE_ORG`| 388 | 55 | 50 | 493 |
| `DRV` | 519 | 77 | 48 | 644 |
| `EVT` | 131 | 9 | 5 | 145 |
| `MISC`   | 8      | 0      | 0      | 8      |
To access these reduced versions of the dataset, you can use the configs `bokmaal-7`, `nynorsk-7`, `combined-7` for the NER tag set with 7 tags ( **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, `combined-8` for the NER tag set with 8 tags (`LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full set (9 tags) will be used.
## Additional Information
### Dataset Curators
NorNE was created as a collaboration between [Schibsted Media Group](https://schibsted.com/), [Språkbanken](https://www.nb.no/forskning/sprakbanken/) at the [National Library of Norway](https://www.nb.no) and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo.
NorNE was added to 🤗 Datasets by the AI-Lab at the National Library of Norway.
### Licensing Information
The NorNE corpus is published under the same [license](https://github.com/ltgoslo/norne/blob/master/LICENSE_NDT.txt) as the Norwegian Dependency Treebank.
### Citation Information
This dataset is described in the paper _NorNE: Annotating Named Entities for Norwegian_ by
Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: https://arxiv.org/abs/1911.12146.
```bibtex
@inproceedings{johansen2019ner,
title={NorNE: Annotating Named Entities for Norwegian},
author={Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg,
Lilja Øvrelid, and Erik Velldal},
booktitle={LREC 2020},
year={2020},
url={https://arxiv.org/abs/1911.12146}
}
```
### Contributions
Thanks to [@versae](https://github.com/versae) for adding this dataset. | 24,596 | [
[
-0.050506591796875,
-0.045166015625,
0.004390716552734375,
0.004901885986328125,
-0.0194854736328125,
-0.01641845703125,
-0.0262908935546875,
-0.032623291015625,
0.038726806640625,
0.038909912109375,
-0.03466796875,
-0.06280517578125,
-0.04266357421875,
0.03... |
CodedotAI/code_clippy | 2022-11-17T19:54:28.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:original",
"language:code",
"license:gpl-3.0",
"arxiv:2107.03374",
"region:us"
] | CodedotAI | This dataset was generated by selecting GitHub repositories from a large collection of repositories. These repositories were collected from https://seart-ghs.si.usi.ch/ and Github portion of [The Pile](https://github.com/EleutherAI/github-downloader) (performed on July 7th, 2021). The goal of this dataset is to provide a training set for pretraining large language models on code data for helping software engineering researchers better understand their impacts on software related tasks such as autocompletion of code. The dataset is split into train, validation, and test splits. There is a version containing duplicates (209GBs compressed) and ones where exact duplicates (132GBs compressed) are removed. Contains mostly JavaScript and Python code, but other programming languages are included as well to various degrees. | @misc{cooper-2021-code-clippy-data,
author = {Nathan Cooper, Artashes Arutiunian, Santiago Hincapié-Potes, Ben Trevett, Arun Raja, Erfan Hossami, Mrinal Mathur, and contributors},
title = {{Code Clippy Data: A large dataset of code data from Github for research into code language models}},
month = jul,
year = 2021,
version = {1.0},
publisher = {GitHub},
url = {https://github.com/ncoop57/gpt-code-clippy}
} | 10 | 45 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- code
license:
- gpl-3.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Code Clippy
---
# Dataset Card for Code Clippy Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://the-eye.eu/public/AI/training_data/code_clippy_data/
- **Repository:** https://github.com/ncoop57/gpt-code-clippy
- **Paper:** [Not yet :)]
- **Leaderboard:** [Not yet :)]
- **Point of Contact:** [Nathan Cooper](mailto:nacooper01@email.wm.edu)
### Dataset Summary
This dataset was generated by selecting GitHub repositories from a large collection of repositories. These repositories were collected from https://seart-ghs.si.usi.ch/ and Github portion of [The Pile](https://github.com/EleutherAI/github-downloader) (performed on July 7th, 2021). The goal of this dataset is to provide a training set for pretraining large language models on code data for helping software engineering researchers better understand their impacts on software related tasks such as autocompletion of code. The dataset is split into train, validation, and test splits. There is a version containing duplicates (209GBs compressed) and ones where exact duplicates (132GBs compressed) are removed. Contains mostly JavaScript and Python code, but other programming languages are included as well to various degrees.
### Supported Tasks and Leaderboards
- `language-modeling`: The dataset can be used to train a model for language modeling of programming languages, which consists of pretraining/finetuning a model to predict missing tokens, either causally or masked, given some context. Success on this task is typically measured by achieving a *low* perplexity score.
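As a reminder of how that metric relates to training, perplexity is simply the exponential of the mean per-token cross-entropy loss; a quick sketch (not part of the dataset tooling):

```python
import math

def perplexity(mean_nll: float) -> float:
    """Perplexity is exp(mean negative log-likelihood per token)."""
    return math.exp(mean_nll)

# A model that is always certain (loss 0) has perplexity 1.0.
print(perplexity(0.0))  # → 1.0
```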
### Languages
Multiple programming languages are included in the dataset.
## Dataset Structure
### Data Instances
```
{
"id": datasets.Value("int64"),
"text": datasets.Value("string"),
"repo_name": datasets.Value("string"),
"stars": datasets.Value("string"),
"repo_language": datasets.Value("string"),
"file_name": datasets.Value("string"),
"mime_type": datasets.Value("string")
}
```
### Data Fields
- `id`: A unique identifier for the data instance.
- `text`: The text of the code.
- `repo_name`: The name of the repository.
- `stars`: The number of stars the repository has.
- `repo_language`: The programming language of the repository.
- `file_name`: The name of the file.
- `mime_type`: The MIME type of the file.
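The fields above can be checked programmatically; a minimal sketch of a record validator for this schema (the example record itself is invented for illustration):

```python
# Expected fields and Python types, following the schema above.
# Note that `stars` is stored as a string in the schema, not an int.
EXPECTED_FIELDS = {
    "id": int,
    "text": str,
    "repo_name": str,
    "stars": str,
    "repo_language": str,
    "file_name": str,
    "mime_type": str,
}

def validate_instance(instance: dict) -> bool:
    """Return True if `instance` has exactly the expected fields and types."""
    if set(instance) != set(EXPECTED_FIELDS):
        return False
    return all(isinstance(instance[k], t) for k, t in EXPECTED_FIELDS.items())

# A hypothetical instance for illustration only.
example = {
    "id": 0,
    "text": "def add(a, b):\n    return a + b\n",
    "repo_name": "octocat/hello-world",
    "stars": "42",
    "repo_language": "Python",
    "file_name": "math_utils.py",
    "mime_type": "text/x-python",
}
print(validate_instance(example))  # → True
```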
### Data Splits
| Size in GBs | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| Duplicate | 194 | 9 | 6.3 |
| Deduplicate | 126 | 3.3 | 3.1 |
## Dataset Creation
### Curation Rationale
To have a code dataset that is large enough to properly train a large language model on.
### Source Data
#### Initial Data Collection and Normalization
- [The Pile](https://github.com/EleutherAI/github-downloader)
- [Seart-GHS](https://seart-ghs.si.usi.ch/)
Repositories were collected from both sources and the helper script from https://github.com/EleutherAI/github-downloader was used to download the repositories. Files were scraped from the downloaded repositories, ignoring files with certain extensions associated with binary or other non-textual/autogenerated content, and the output was converted into the [LM_Dataformat](https://pypi.org/project/lm-dataformat/) format.
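A minimal sketch of the kind of extension filter described above: keep text-like source files and skip extensions associated with binary or autogenerated content. The exact extension list used for this dataset is an assumption here, not the project's actual list.

```python
# Hypothetical skip-list for illustration; the real pipeline's list differs.
BINARY_OR_GENERATED_EXTS = (".png", ".jpg", ".gif", ".zip", ".exe", ".so", ".min.js")

def keep_file(path: str) -> bool:
    """Return True if `path` does not end with a skipped extension."""
    return not path.lower().endswith(BINARY_OR_GENERATED_EXTS)

paths = ["app.py", "logo.png", "bundle.min.js", "README.md"]
print([p for p in paths if keep_file(p)])  # → ['app.py', 'README.md']
```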
#### Who are the source language producers?
Software developers.
### Annotations
#### Annotation process
No annotation was performed.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since this data was collected from public repositories, there exists potential for personal and sensitive information to be included in the data through developers accidentally or on purpose uploading their secret keys, passwords, API keys, emails, etc.
## Considerations for Using the Data
### Social Impact of Dataset
The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion of what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and models that may be trained from it, **as well as some differences in views from the paper, particularly around legal implications**.
1. **Over-reliance:** A language model trained on large datasets such as this one for the task of autogenerating code may generate plausible solutions that appear correct but are not necessarily so. Failing to properly evaluate the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using a language model trained on this dataset.
2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
3. **Security implications:** No filtering or checking of vulnerabilities or buggy code was performed. This means that the dataset may contain code that may be malicious or contain vulnerabilities. Therefore, any model trained on this dataset may generate vulnerable, buggy, or malicious code. In safety critical software, this could lead to software that may work improperly and could result in serious consequences depending on the software. Additionally, a model trained on this dataset may be used to generate malicious code on purpose in order to perform ransomware or other such attacks.
4. **Legal implications:** No filtering was performed on licensed code. This means that the dataset may contain restrictively licensed code. As discussed in the paper, public GitHub repositories may fall under "fair use." However, there have been few, if any, previous cases of such usage of licensed, publicly available code. Therefore, any model trained on this dataset may be required to obey license terms that align with the software it was trained on, such as GPL-3.0, which is why we purposefully put this dataset under the GPL-3.0 license. The legal ramifications of using a language model trained on this dataset are unclear.
### Discussion of Biases
The programming languages most represented in this dataset are JavaScript and Python. Other popular languages, such as C and C++, are less represented, so model performance for those languages will be comparatively lower. Additionally, this dataset only contains public repositories, and so may not be representative of code written by private developers. No filtering was performed for potentially racist, offensive, or otherwise inappropriate content, so such content may be present in the dataset and reflected in models trained on it.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Nathan Cooper, Artashes Arutiunian, Santiago Hincapié-Potes, Ben Trevett, Arun Raja, Erfan Hossami, Mrinal Mathur, and contributors!
### Licensing Information
This repository is under the GPL-3.0 license.
### Citation Information
```
@misc{cooper-2021-code-clippy-data,
  author       = {Nathan Cooper, Artashes Arutiunian, Santiago Hincapié-Potes, Ben Trevett, Arun Raja, Erfan Hossami, Mrinal Mathur, and contributors},
title = {{Code Clippy Data: A large dataset of code data from Github for research into code language models}},
month = jul,
year = 2021,
version = {1.0},
publisher = {GitHub},
url = {https://github.com/ncoop57/gpt-code-clippy}
}
```
### Contributions
Thanks to [@ncoop57](https://github.com/ncoop57), [@arampacha](https://github.com/arampacha), [@shpotes](https://github.com/shpotes), [@bentrevett](https://github.com/bentrevett), [@arunraja-hub](https://github.com/arunraja-hub), [@taisazero](https://github.com/taisazero), [@Mrinal18](https://github.com/Mrinal18), and contributors for adding this dataset.
| 9,635 | [embedding vector truncated] |
KETI-AIR/nikl | 2021-06-08T06:42:34.000Z | [
"region:us"
] | KETI-AIR | Description is **formatted** as markdown.
It should also contain any processing which has been applied (if any),
(e.g. corrupted example skipped, images cropped,...): | 1 | 45 | 2022-03-02T23:29:22 | <!--
Copyright 2021 san kim
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# National Institute of Korean Language(NIKL) Corpus
| 625 | [embedding vector truncated] | |
MBAH/MOVIESON | 2021-09-18T00:04:34.000Z | [
"region:us"
] | MBAH | null | null | 0 | 45 | 2022-03-02T23:29:22 | https://mahoningmed.org/docs/123movies-watch-after-we-fell-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cinderella-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cryptozoo-2021full-movie-hd-free/
https://mahoningmed.org/docs/123movieswatch-breathless-2021hd-full-movie-online/
https://mahoningmed.org/docs/123movieswatch-firebird-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-paw-patrol-the-movie-2021-full-hd-movie-online-free/
https://mahoningmed.org/docs/atch-sweet-girl-2021free-hd-full-movie-online/
https://mahoningmed.org/docs/123movies-watch-hes-all-that-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-im-your-man-2020hd-full-movie-online-free/
https://mahoningmed.org/docs/watchcrazy-fist-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watchpaw-patrol-the-movie-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-black-widow-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/full-watch-dont-breathe-2-2021-hd-movie-online-free/
https://mahoningmed.org/docs/watchthe-tomorrow-war-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-jurassic-hunt-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-after-we-fell-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/23movies-watch-free-guy-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-candyman-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-the-night-house-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watchsas-red-notice-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchshang-chi-and-the-legend-of-the-ten-rings-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-luca-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-escape-room-tournament-of-champions-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watchnarco-sub-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movies-watch-malignant-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-mortal-kombat-legends-battle-of-the-realms-2021full-online-movie-free-hd/
https://mahoningmed.org/docs/watch-space-jam-a-new-legacy-2021-hd-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cinderella-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/watcheggs-run-2021hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-f9-2021full-online-movie-free-hd-1080p/
https://mahoningmed.org/docs/123movies-watch-jurassic-hunt-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/123movies-watch-vacation-friends-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-silent-night-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-card-counter-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-silent-night-2021-hd-full-movie-online-free-2/
https://mahoningmed.org/docs/123movies-watch-jolt-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-last-mercenary-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-beckett-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-rogue-hostage-2018-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-the-boss-baby-family-business-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/123movies-watch-cruella-2021-hd-full-movie-online-free/
https://mahoningmed.org/docs/watch-the-manson-brothers-midnight-zombie-massacre-2021full-hd-movie-online-free-123movies/
https://mahoningmed.org/docs/watchthe-suicide-squad-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watch-jungle-cruise-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/watch-after-we-fell-2021full-hd-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-the-last-warrior-root-of-evil-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/123movieswatch-kate-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/23movieswatch-wrath-of-man-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/watchthe-forever-purge-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchafterlife-of-the-party-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchthe-conjuring-the-devil-made-me-do-it-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchold-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchinsensate-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchdreamcatcher-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchthe-kissing-booth-3-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchjj-plus-e-2021-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchyoung-sister-in-law-3-2019-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/watchjurassic-world-fallen-kingdom-2018-hd-full-movie-online-for-free/
https://mahoningmed.org/docs/123movieswatch-danny-boy-2021hd-full-movie-online-free/
https://mahoningmed.org/docs/watchsnake-eyes-g-i-joe-origins-2021-hd-full-movie-online-for-free/ | 5,340 | [embedding vector truncated] |
csarron/25m-img-caps | 2022-03-28T18:51:26.000Z | [
"region:us"
] | csarron | null | null | 1 | 45 | 2022-03-02T23:29:22 | see https://huggingface.co/datasets/csarron/4m-img-caps for example usage | 73 | [embedding vector truncated] |
ehcalabres/ravdess_speech | 2022-10-24T15:51:41.000Z | [
"task_categories:audio-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | ehcalabres | null | null | 3 | 45 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- speech-emotion-recognition
---
# Dataset Card for ravdess_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://zenodo.org/record/1188976#.YUS4MrozZdS
- **Paper:** https://doi.org/10.1371/journal.pone.0196391
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** ravdess@gmail.com
### Dataset Summary
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 24 professional actors (12 female, 12 male), vocalizing two lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression. All audio files are 16-bit, 48 kHz .wav.
### Supported Tasks and Leaderboards
- `audio-classification`: The dataset can be used to train a model for audio classification tasks, which consist of predicting the emotion expressed in each audio clip.
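The eight emotion categories listed in the summary (seven expressions plus neutral) can be mapped to integer class ids for such a classification setup. The ordering below is an illustrative assumption, not an official label scheme of the dataset:

```python
# Hypothetical label ordering for illustration only.
EMOTIONS = ["neutral", "calm", "happy", "sad", "angry", "fearful", "disgust", "surprise"]
label2id = {name: i for i, name in enumerate(EMOTIONS)}
id2label = {i: name for name, i in label2id.items()}

print(label2id["happy"])  # → 2
print(id2label[0])        # → neutral
```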
### Languages
The audios available in the dataset are in English spoken by actors in a neutral North American accent.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The RAVDESS is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, CC BY-NC-SA 4.0
Commercial licenses for the RAVDESS can also be purchased. For more information, please visit our license fee page, or contact us at ravdess@gmail.com.
### Citation Information
Livingstone SR, Russo FA (2018) The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5): e0196391. https://doi.org/10.1371/journal.pone.0196391. | 3,871 | [
[
-0.031341552734375,
-0.055450439453125,
-0.00908660888671875,
0.034759521484375,
-0.004390716552734375,
0.0121002197265625,
-0.032623291015625,
-0.04144287109375,
0.040985107421875,
0.040985107421875,
-0.07208251953125,
-0.07562255859375,
-0.043670654296875,
... |
transformersbook/codeparrot | 2022-02-05T16:15:40.000Z | [
"python",
"code",
"region:us"
] | transformersbook | null | null | 35 | 45 | 2022-03-02T23:29:22 | ---
tags:
- python
- code
---
# CodeParrot 🦜 Dataset
## What is it?
This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).
## Creation
It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed) in size. The SQL query to create the dataset is the following:
```sql
SELECT
f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
`bigquery-public-data.github_repos.files` AS f
JOIN
`bigquery-public-data.github_repos.contents` AS c
ON
f.id = c.id
JOIN
`bigquery-public-data.github_repos.licenses` AS l
ON
f.repo_name = l.repo_name
WHERE
NOT c.binary
AND ((f.path LIKE '%.py')
AND (c.size BETWEEN 1024 AND 1048575))
```
## Duplication
Note that about 70% of the dataset is duplicated. If you use the dataset, make sure to deal with the duplicates appropriately. See [codeparrot-clean](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for a deduplicated version of this dataset. | 1,389 | [embedding vector truncated] |
turingbench/TuringBench | 2022-10-25T09:56:51.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"language_creators:found",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:apache-2.0",
... | turingbench | This benchmark environment contains a dataset comprised of generated texts from pre-trained language models.
We also have two benchmark tasks - human vs. machine (i.e., binary classification) and authorship
attribution (i.e., multi-class classification). These benchmark tasks and dataset are hosted on the
TuringBench website with Leaderboards for each task. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 45 | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- found
language_creators:
- found
- machine-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [TuringBench]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/TuringBench/TuringBench
- **Repository:** https://github.com/TuringBench/TuringBench
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@TuringBench](https://github.com/TuringBench) for adding this dataset.
| 2,856 | [embedding vector truncated] |
tner/tweetner7 | 2022-11-27T18:50:28.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2210.03797",
"region:us"
] | tner | [TweetNER7](TBA) | TBA | 1 | 45 | 2022-07-18T10:39:50 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TweetNER7
---
# Dataset Card for "tner/tweetner7"
## Dataset Description
- **Repository:** [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper)
- **Paper:** [https://arxiv.org/abs/2210.03797](https://arxiv.org/abs/2210.03797)
- **Dataset:** TweetNER7
- **Domain:** Twitter
- **Number of Entity:** 7
### Dataset Summary
This is the official repository of TweetNER7 (["Named Entity Recognition in Twitter:
A Dataset and Analysis on Short-Term Temporal Shifts, AACL main conference 2022"](https://arxiv.org/abs/2210.03797)), an NER dataset on Twitter with 7 entity labels. Each instance of TweetNER7 comes with a timestamp which distributes from September 2019 to August 2021.
The tweet collection used in TweetNER7 is same as what used in [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi).
The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too.
- Entity Types: `corporation`, `creative_work`, `event`, `group`, `location`, `product`, `person`
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we replace its display name (or account name) with symbols `{@}`.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format a tweet follows below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
We ask annotators to ignore those special tokens but label the verified users' mentions.
### Data Split
| split | number of instances | description |
|:------------------|------:|------:|
| train_2020 | 4616 | training dataset from September 2019 to August 2020 |
| train_2021 | 2495 | training dataset from September 2020 to August 2021 |
| train_all | 7111 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 576 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 310 | validation dataset from September 2020 to August 2021 |
| test_2020 | 576 | test dataset from September 2019 to August 2020 |
| test_2021 | 2807 | test dataset from September 2020 to August 2021 |
| train_random | 4616 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 576 | randomly sampled validation dataset with the same size as `validation_2020` from `validation_all` |
| extra_2020 | 87880 | extra tweets without annotations from September 2019 to August 2020 |
| extra_2021 | 93594 | extra tweets without annotations from September 2020 to August 2021 |
For the temporal-shift setting, models should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, models should be trained on `train_all`, the most representative training set, with `validation_2021`, and evaluated on `test_2021`.
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['Morning', '5km', 'run', 'with', '{{USERNAME}}', 'for', 'breast', 'cancer', 'awareness', '#', 'pinkoctober', '#', 'breastcancerawareness', '#', 'zalorafit', '#', 'zalorafitxbnwrc', '@', 'The', 'Central', 'Park', ',', 'Desa', 'Parkcity', '{{URL}}'],
'tags': [14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 2, 14, 2, 14, 14, 14, 14, 14, 14, 4, 11, 11, 11, 11, 14],
'id': '1183344337016381440',
'date': '2019-10-13'
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/tweetner7/raw/main/dataset/label.json).
```python
{
"B-corporation": 0,
"B-creative_work": 1,
"B-event": 2,
"B-group": 3,
"B-location": 4,
"B-person": 5,
"B-product": 6,
"I-corporation": 7,
"I-creative_work": 8,
"I-event": 9,
"I-group": 10,
"I-location": 11,
"I-person": 12,
"I-product": 13,
"O": 14
}
```
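For convenience, the mapping can be inverted to decode an instance's integer `tags` back into label strings; a short sketch, using two tokens from the example instance above:

```python
# label2id as given in the dataset's label.json above.
label2id = {
    "B-corporation": 0, "B-creative_work": 1, "B-event": 2, "B-group": 3,
    "B-location": 4, "B-person": 5, "B-product": 6,
    "I-corporation": 7, "I-creative_work": 8, "I-event": 9, "I-group": 10,
    "I-location": 11, "I-person": 12, "I-product": 13, "O": 14,
}
id2label = {i: label for label, i in label2id.items()}

tokens = ["Central", "Park"]
tags = [4, 11]
decoded = [(tok, id2label[t]) for tok, t in zip(tokens, tags)]
print(decoded)  # → [('Central', 'B-location'), ('Park', 'I-location')]
```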
## Models
See full evaluation metrics [here](https://github.com/asahi417/tner/blob/master/MODEL_CARD.md#models-for-tweetner7).
### Main Models
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:--------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-all`](https://huggingface.co/tner/roberta-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.75 | 61.25 |
| [`tner/roberta-base-tweetner7-all`](https://huggingface.co/tner/roberta-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.16 | 60.81 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.68 | 61 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-all`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.26 | 60.7 |
| [`tner/bertweet-large-tweetner7-all`](https://huggingface.co/tner/bertweet-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large) | 66.46 | 61.87 |
| [`tner/bertweet-base-tweetner7-all`](https://huggingface.co/tner/bertweet-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.36 | 60.52 |
| [`tner/bert-large-tweetner7-all`](https://huggingface.co/tner/bert-large-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.58 | 59 |
| [`tner/bert-base-tweetner7-all`](https://huggingface.co/tner/bert-base-tweetner7-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 62.3 | 57.59 |
| [`tner/roberta-large-tweetner7-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.02 | 60.9 |
| [`tner/roberta-base-tweetner7-continuous`](https://huggingface.co/tner/roberta-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 65.47 | 60.01 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 65.87 | 61.07 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-continuous`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 65.51 | 60.57 |
| [`tner/bertweet-large-tweetner7-continuous`](https://huggingface.co/tner/bertweet-large-tweetner7-continuous)                               | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7)  | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large)                                                                               |             66.41 |             61.66 |
| [`tner/bertweet-base-tweetner7-continuous`](https://huggingface.co/tner/bertweet-base-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.84 | 61.02 |
| [`tner/bert-large-tweetner7-continuous`](https://huggingface.co/tner/bert-large-tweetner7-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 63.2 | 57.67 |
| [`tner/roberta-large-tweetner7-2021`](https://huggingface.co/tner/roberta-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.05 | 59.11 |
| [`tner/roberta-base-tweetner7-2021`](https://huggingface.co/tner/roberta-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 61.76 | 57 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2021`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 63.98 | 58.91 |
| [`tner/bertweet-large-tweetner7-2021`](https://huggingface.co/tner/bertweet-large-tweetner7-2021)                                           | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7)  | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large)                                                                               |              62.9 |             58.13 |
| [`tner/bertweet-base-tweetner7-2021`](https://huggingface.co/tner/bertweet-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 63.09 | 57.35 |
| [`tner/bert-large-tweetner7-2021`](https://huggingface.co/tner/bert-large-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 59.75 | 53.93 |
| [`tner/bert-base-tweetner7-2021`](https://huggingface.co/tner/bert-base-tweetner7-2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.67 | 55.5 |
| [`tner/roberta-large-tweetner7-2020`](https://huggingface.co/tner/roberta-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.76 | 60 |
| [`tner/roberta-base-tweetner7-2020`](https://huggingface.co/tner/roberta-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.21 | 59.11 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 64.28 | 59.31 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-2020`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 62.87 | 58.26 |
| [`tner/bertweet-large-tweetner7-2020`](https://huggingface.co/tner/bertweet-large-tweetner7-2020)                                           | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7)  | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large)                                                                               |             64.01 |             59.47 |
| [`tner/bertweet-base-tweetner7-2020`](https://huggingface.co/tner/bertweet-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 64.06 | 59.44 |
| [`tner/bert-large-tweetner7-2020`](https://huggingface.co/tner/bert-large-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 61.43 | 56.14 |
| [`tner/bert-base-tweetner7-2020`](https://huggingface.co/tner/bert-base-tweetner7-2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.09 | 54.67 |
Model description follows below.
* Model with suffix `-all`: Model fine-tuned on `train_all` and validated on `validation_2021`.
* Model with suffix `-continuous`: Model fine-tuned on `train_2021` continuously after fine-tuning on `train_2020` and validated on `validation_2021`.
* Model with suffix `-2021`: Model fine-tuned only on `train_2021` and validated on `validation_2021`.
* Model with suffix `-2020`: Model fine-tuned only on `train_2020` and validated on `validation_2020`.
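In the tables above, micro F1 pools true positives, false positives, and false negatives over all entity types before computing F1, while macro F1 averages the per-type F1 scores, so rare entity types weigh more heavily in the macro number. A self-contained sketch with invented per-type counts (not actual TweetNER7 figures):

```python
# Micro vs. macro F1 over entity types (hypothetical counts, not real results).
def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Per-type (tp, fp, fn) counts for three illustrative entity types.
counts = {"person": (90, 10, 20), "location": (40, 20, 10), "product": (5, 5, 15)}

# Macro: average of per-type F1 scores (rare "product" drags it down).
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro: pool the counts first, then compute a single F1.
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = f1(tp, fp, fn)

print(round(micro_f1, 3), round(macro_f1, 3))  # -> 0.771 0.639
```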
### Sub Models (used in ablation study)
- Model fine-tuned only on `train_random` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-random`](https://huggingface.co/tner/roberta-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 66.33 | 60.96 |
| [`tner/twitter-roberta-base-2019-90m-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-2019-90m-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-2019-90m`](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) | 63.29 | 58.5 |
| [`tner/roberta-base-tweetner7-random`](https://huggingface.co/tner/roberta-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-base`](https://huggingface.co/roberta-base) | 64.04 | 59.23 |
| [`tner/twitter-roberta-base-dec2020-tweetner7-random`](https://huggingface.co/tner/twitter-roberta-base-dec2020-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`cardiffnlp/twitter-roberta-base-dec2020`](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020) | 64.72 | 59.97 |
| [`tner/bertweet-large-tweetner7-random`](https://huggingface.co/tner/bertweet-large-tweetner7-random)                                 | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7)  | [`vinai/bertweet-large`](https://huggingface.co/vinai/bertweet-large)                                                                               |             64.86 |             60.49 |
| [`tner/bertweet-base-tweetner7-random`](https://huggingface.co/tner/bertweet-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`vinai/bertweet-base`](https://huggingface.co/vinai/bertweet-base) | 65.55 | 59.58 |
| [`tner/bert-large-tweetner7-random`](https://huggingface.co/tner/bert-large-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-large`](https://huggingface.co/bert-large) | 62.39 | 57.54 |
| [`tner/bert-base-tweetner7-random`](https://huggingface.co/tner/bert-base-tweetner7-random) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`bert-base`](https://huggingface.co/bert-base) | 60.91 | 55.92 |
- Model fine-tuned on the self-labeled dataset on `extra_{2020,2021}` and validated on `validation_2020`.
| Model (link) | Data | Language Model | Micro F1 (2021) | Macro F1 (2021) |
|:----------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------|:--------------------------------------------------------|------------------:|------------------:|
| [`tner/roberta-large-tweetner7-selflabel2020`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.56 | 59.63 |
| [`tner/roberta-large-tweetner7-selflabel2021`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.6 | 59.45 |
| [`tner/roberta-large-tweetner7-2020-selflabel2020-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2020-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.46 | 60.39 |
| [`tner/roberta-large-tweetner7-2020-selflabel2021-all`](https://huggingface.co/tner/roberta-large-tweetner7-2020-selflabel2021-all) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.52 | 59.45 |
| [`tner/roberta-large-tweetner7-selflabel2020-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2020-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 65.15 | 60.23 |
| [`tner/roberta-large-tweetner7-selflabel2021-continuous`](https://huggingface.co/tner/roberta-large-tweetner7-selflabel2021-continuous) | [`tweetner7`](https://huggingface.co/datasets/tner/tweetner7) | [`roberta-large`](https://huggingface.co/roberta-large) | 64.48 | 59.41 |
Model description follows below.
* Model with suffix `-selflabel2020`: Fine-tuned on the self-annotated data of the `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2021`: Fine-tuned on the self-annotated data of the `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-2020-selflabel2020-all`: Fine-tuned on the combined training dataset of the self-annotated `extra_2020` split and `train_2020` of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-2020-selflabel2021-all`: Fine-tuned on the combined training dataset of the self-annotated `extra_2021` split and `train_2020` of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2020-continuous`: Fine-tuned on `train_2020` first, then continuously fine-tuned on the self-annotated `extra_2020` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
* Model with suffix `-selflabel2021-continuous`: Fine-tuned on `train_2020` first, then continuously fine-tuned on the self-annotated `extra_2021` split of [tweetner7](https://huggingface.co/datasets/tner/tweetner7).
### Reproduce Experimental Result
To reproduce the experimental results in our AACL paper, please see the repository
[https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper).
## Citation Information
```
@inproceedings{ushio-etal-2022-tweet,
title = "{N}amed {E}ntity {R}ecognition in {T}witter: {A} {D}ataset and {A}nalysis on {S}hort-{T}erm {T}emporal {S}hifts",
author = "Ushio, Asahi and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco and
Camacho-Collados, Jose",
booktitle = "The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing",
month = nov,
year = "2022",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| 27,372 |
jamescalam/unsplash-25k-photos | 2022-09-13T13:02:46.000Z | [
"task_categories:image-to-image",
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:text-to-image",
"task_categories:zero-shot-image-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n... | jamescalam | This is a dataset that streams photos data from the Unsplash 25K servers. | @InProceedings{huggingface:dataset,
title = {Unsplash Lite Dataset 1.2.0 Photos},
author={Unsplash},
year={2022}
} | 26 | 45 | 2022-08-27T22:01:09 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Unsplash Lite 25K Photos
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- images
- unsplash
- photos
task_categories:
- image-to-image
- image-classification
- image-to-text
- text-to-image
- zero-shot-image-classification
task_ids: []
---
# Unsplash Lite Dataset Photos
This dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file `photos.tsv000`. The dataset builder script streams this data directly from the Unsplash 25K dataset source.
For full details, please see the [Unsplash Dataset GitHub repo](https://github.com/unsplash/datasets), or read the preview (copied from the repo) below.
---
# The Unsplash Dataset

The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.
The Unsplash Dataset is offered in two datasets:
- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches
- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches
As the Unsplash library continues to grow, we’ll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/).
We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets.
For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data).
## Download
### Lite Dataset
The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
[⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw]
### Full Dataset
The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43 GB raw).
## Documentation
See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md).
## Usage
You can follow these examples to load the dataset in these common formats:
- [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql)
- [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python)
- [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example)
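For the Python route, note that each table ships as one or more plain tab-separated files split into numbered parts (e.g. `photos.tsv000`, `photos.tsv001`, …). A minimal pandas sketch on an in-memory stand-in — the two column names used here are illustrative only; see the documentation above for the real schema:

```python
import io

import pandas as pd

# Tiny in-memory stand-in for one TSV part (column names illustrative).
tsv = "photo_id\tphoto_url\nabc123\thttps://unsplash.com/photos/abc123\n"
photos = pd.read_csv(io.StringIO(tsv), sep="\t")

# With a real download, concatenate the numbered parts instead, e.g.:
# import glob
# photos = pd.concat(
#     (pd.read_csv(p, sep="\t") for p in sorted(glob.glob("photos.tsv*"))),
#     ignore_index=True,
# )
print(len(photos), list(photos.columns))  # -> 1 ['photo_id', 'photo_url']
```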
## Share your work
We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.
We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [data@unsplash.com](mailto:data@unsplash.com).
If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data).
----
The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).
| 4,559 |
sinhala-nlp/SemiSOLD | 2022-12-20T20:21:26.000Z | [
"region:us"
] | sinhala-nlp | null | null | 0 | 45 | 2022-11-04T20:42:38 | # SOLD - A Benchmark for Sinhala Offensive Language Identification
In this repository, we introduce the Sinhala Offensive Language Dataset **(SOLD)** and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter, annotated as offensive or not offensive at both sentence level and token level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
:warning: This repository contains texts that may be offensive and harmful.
## Annotation
We use an annotation scheme split into two levels deciding (a) Offensiveness of a tweet (sentence-level) and (b) Tokens that contribute to the offence at sentence-level (token-level).
### Sentence-level
Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets:
* **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* **Not Offensive (NOT)**: Posts that do not contain offense or profanity.
Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.
### Token-level
To provide a human explanation of labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator’s decision of the sentence-level labels. Therefore, We ask the annotators to highlight particular tokens in a tweet that supports their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight tokens from the text that supports the judgement while including non-verbal expressions such as emojis and morphemes that are used to convey the intention as well. We use this as token-level offensive labels in SOLD.

## Data
SOLD is released in HuggingFace. It can be loaded in to pandas dataframes using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train'))
sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test'))
```
The dataset contains the following columns:
* **post_id** - Twitter ID
* **text** - Post text
* **tokens** - Tokenised text. Each token is separated by a space.
* **rationals** - Offensive tokens. If a token is offensive it is shown as 1 and 0 otherwise.
* **label** - Sentence-level label, offensive or not-offensive.

SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.
```python
from datasets import Dataset
from datasets import load_dataset
semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train'))
```
The dataset contains the following columns:
* **post_id** - Twitter ID
* **text** - Post text
Furthermore, it contains predicted offensiveness scores from eleven classifiers trained on the SOLD training set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
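These per-classifier scores make it possible to filter the weakly labelled tweets before using them, e.g. keeping only tweets on which the classifiers agree — the same idea behind the `--std` cut-off in the experiment scripts. A toy pandas sketch with invented scores, three of the classifier columns, and an illustrative 0.2 threshold:

```python
import pandas as pd

# Toy stand-in for SemiSOLD: each score column holds one classifier's
# predicted offensiveness (all values invented for illustration).
semi_sold = pd.DataFrame(
    {
        "post_id": [1, 2, 3],
        "xlmr": [0.95, 0.50, 0.05],
        "mbert": [0.90, 0.10, 0.10],
        "sinbert": [0.92, 0.85, 0.02],
    }
)
score_cols = ["xlmr", "mbert", "sinbert"]

mean = semi_sold[score_cols].mean(axis=1)
std = semi_sold[score_cols].std(axis=1)

# Keep tweets the classifiers agree on (low std); label them by mean score.
agree = std < 0.2
confident = semi_sold[agree].assign(
    label=(mean[agree] > 0.5).map({True: "OFF", False: "NOT"})
)
print(confident[["post_id", "label"]])  # post_ids 1 and 3 survive the cut
```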
## Experiments
Clone the repository and install the libraries using the following command (preferably inside a conda environment)
~~~
pip install -r requirements.txt
~~~
### Sentence-level
Sentence-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
* hi - Perform transfer learning from HASOC 2019 Hindi dataset (Modha et al., 2019).
* en - Perform transfer learning from Offenseval English dataset (Zampieri et al., 2019).
* si - Perform transfer learning from CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
Sentence-level CNN and LSTM based experiments can be executed using the following command.
~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~
The command takes the following arguments;
~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embeddinng files.
--augment : Perform semi supervised data augmentation.
--std : Standard deviation of the models to cut down data augmentation.
--augment_type: The type of the data augmentation.
* off - Augment only the offensive instances.
* normal - Augment both offensive and non-offensive instances.
~~~
### Token-level
Token-level transformer based experiments can be executed using the following command.
~~~
python -m experiments.token_level.sinhala_mudes
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hatex or tsd).
* hatex - Perform transfer learning from HateXplain dataset (Mathew et al., 2021).
* tsd - Perform transfer learning from TSD dataset (Pavlopoulos et al., 2021).
~~~
Token-level LIME experiments can be executed using the following command.
~~~
python -m experiments.token_level.sinhala_lime
~~~
The command takes the following arguments;
~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc ).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
## Acknowledgments
We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators that provided their free time and efforts to help us produce SOLD.
## Citation
If you are using the dataset or the models please cite the following paper
~~~
@article{ranasinghe2022sold,
title={SOLD: Sinhala Offensive Language Dataset},
author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
journal={arXiv preprint arXiv:2212.00851},
year={2022}
}
~~~ | 7,845 |
bigbio/distemist | 2023-04-01T16:51:57.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022. | @article{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases
from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell,
Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras,
Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum.
CEUR Workshop Proceedings},
year={2022}
} | 3 | 45 | 2022-11-13T22:08:11 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: DisTEMIST
homepage: https://zenodo.org/record/6671292
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for DisTEMIST
## Dataset Description
- **Homepage:** https://zenodo.org/record/6671292
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED
The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022.
## Citation Information
```
@article{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases
from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell,
Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras,
Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum.
CEUR Workshop Proceedings},
year={2022}
}
```
| 1,312 |
bookbot/ljspeech_phonemes | 2022-12-21T23:24:29.000Z | [
"region:us"
] | bookbot | null | null | 2 | 45 | 2022-12-21T23:18:09 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: file
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: phonemes
dtype: string
splits:
- name: train
num_bytes: 3863152206.0
num_examples: 13100
download_size: 3787337731
dataset_size: 3863152206.0
---
# Dataset Card for "ljspeech_phonemes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 587 |
cahya/wikipedia_id_20230101 | 2023-01-03T16:04:27.000Z | [
"region:us"
] | cahya | null | null | 1 | 45 | 2023-01-03T16:04:05 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1051737365
num_examples: 634559
download_size: 544132473
dataset_size: 1051737365
---
# Dataset Card for "wikipedia_id_20230101"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 470 |
EddieChen372/devign_with_vul_lines | 2023-02-04T15:24:46.000Z | [
"region:us"
] | EddieChen372 | null | null | 0 | 45 | 2023-02-01T12:14:23 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: func
dtype: string
- name: target
dtype: bool
- name: project
dtype: string
- name: commit_id
dtype: string
- name: func_clean
dtype: string
- name: vul_lines
struct:
- name: code
sequence: string
- name: line_no
sequence: int64
- name: normalized_func
dtype: string
splits:
- name: validation
num_bytes: 16112369
num_examples: 2732
- name: train
num_bytes: 132054560
num_examples: 21854
- name: test
num_bytes: 16328301
num_examples: 2732
download_size: 60272537
dataset_size: 164495230
---
# Dataset Card for "devign_with_vul_lines"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 833 | [
[
-0.0321044921875,
-0.0284423828125,
0.01090240478515625,
0.0189971923828125,
-0.018096923828125,
-0.006801605224609375,
0.04388427734375,
-0.00832366943359375,
0.047943115234375,
0.06195068359375,
-0.0604248046875,
-0.054290771484375,
-0.0220794677734375,
-0... |
HighCWu/diffusiondb_2m_first_5k_canny | 2023-02-16T14:53:35.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:openrail",
"region:us"
] | HighCWu | null | null | 4 | 45 | 2023-02-16T14:16:14 | ---
dataset_info:
features:
- name: image
dtype: image
- name: guide
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3204091410
num_examples: 5000
download_size: 3203076374
dataset_size: 3204091410
license: openrail
task_categories:
- text-to-image
language:
- en
size_categories:
- 1K<n<10K
---
# Dataset Card for "diffusiondb_2m_first_5k_canny"
The first 5k images of [diffusiondb 2m](https://huggingface.co/datasets/poloclub/diffusiondb), processed into edge maps with the Canny algorithm.
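The edge maps were produced with the Canny algorithm (typically run via an implementation such as OpenCV's `cv2.Canny`). As a dependency-free sketch of the underlying idea — thresholding the local intensity gradient — and not the full Canny pipeline (no smoothing, non-maximum suppression, or hysteresis):

```python
def gradient_edges(img, threshold=50):
    """Mark pixels whose horizontal intensity jump exceeds `threshold`.

    `img` is a list of rows of grayscale values; this is only a toy
    stand-in for the real Canny pipeline used to build the dataset.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - 1):
            if abs(img[y][x + 1] - img[y][x]) > threshold:
                edges[y][x] = 1
    return edges

# A synthetic image: dark left half, bright right half.
img = [[0, 0, 0, 255, 255]] * 4
edges = gradient_edges(img)
```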
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| 665 | [
[
-0.0372314453125,
-0.045654296875,
0.037689208984375,
0.039337158203125,
-0.030181884765625,
-0.0120697021484375,
0.02508544921875,
-0.01470184326171875,
0.058807373046875,
0.03961181640625,
-0.0831298828125,
-0.06597900390625,
-0.0606689453125,
-0.027145385... |
vietgpt/xnli_vi | 2023-07-04T05:38:23.000Z | [
"region:us"
] | vietgpt | null | null | 1 | 45 | 2023-02-22T16:34:58 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 101417430
num_examples: 392702
- name: test
num_bytes: 1190217
num_examples: 5010
- name: validation
num_bytes: 590680
num_examples: 2490
download_size: 57688285
dataset_size: 103198327
---
# XNLI
- Source: https://huggingface.co/datasets/xnli
- Num examples:
- 392,702 (train)
- 2,490 (validation)
- 5,010 (test)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/xnli_vi")
```
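As a self-contained illustration of the prompt formatting used by the `preprocess` helper below (template 0 only, with the same Vietnamese answer choices):

```python
def format_template_0(premise, hypothesis, label,
                      sep_key="<|endofprompt|>", end_key="<|endoftext|>"):
    # Label ids: 0 = entailment, 1 = neutral, 2 = contradiction
    answer_choices = ["Đúng", "Không kết luận", "Sai"]
    return (
        'Hãy coi những điều sau đây là sự thật: "{p}"\n'
        'Vậy phát biểu sau đây: "{h}" là Đúng hay Sai, hay Không kết luận?\n'
        "{sep}\n{label}\n{end}"
    ).format(p=premise, h=hypothesis, sep=sep_key,
             label=answer_choices[label], end=end_key)

# A made-up example pair, labeled as entailment (0).
text = format_template_0("Trời đang mưa.", "Đường ướt.", 0)
```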
- Format for NLI task
```python
import random
def preprocess(
sample,
sep_key="<|endofprompt|>",
end_key="<|endoftext|>",
):
premise = sample['premise']
hypothesis = sample['hypothesis']
label = sample['label']
template_idx = random.randint(0, 3)
if template_idx == 0:
answer_choices = ["Đúng", "Không kết luận", "Sai"]
return {'text': """Hãy coi những điều sau đây là sự thật: "{premise}"
Vậy phát biểu sau đây: "{hypothesis}" là Đúng hay Sai, hay Không kết luận?
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
elif template_idx == 1:
answer_choices = ["Đúng", "Không kết luận", "Sai"]
return {'text': """{premise}
Câu hỏi: Điều này có nghĩa là "{hypothesis}"? Đúng hay Sai, hay Không kết luận?
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
elif template_idx == 2:
answer_choices = ["Đúng", "Không kết luận", "Sai"]
return {'text': """{premise}
Câu hỏi: {hypothesis} là Đúng hay Sai, hay Không kết luận?
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
elif template_idx == 3:
answer_choices = ["Yes", "Maybe", "No"]
return {'text': """Cho rằng {premise}, nó có tuân theo giả thiết {hypothesis} không? Trả lời Có hay Không, hay Có thể.
{sep_key}
{label}
{end_key}""".format(
premise=premise,
hypothesis=hypothesis,
sep_key=sep_key,
label=answer_choices[label],
end_key=end_key,
)}
"""
Cho rằng Bạn biết trong mùa giải và tôi đoán ở mức độ của bạn , bạn sẽ mất chúng đến mức độ tiếp theo nếu họ quyết định nhớ lại đội ngũ cha mẹ các chiến binh quyết định gọi để nhớ lại một người từ ba a sau đó một người đàn ông đi lên đến thay thế anh ta và một người đàn ông nào đó đi lên để thay thế anh ta ., nó có tuân theo giả thiết Anh sẽ mất mọi thứ ở mức độ sau nếu người dân nhớ lại . không? Trả lời Có hay Không, hay Có thể.
<|endofprompt|>
Yes
<|endoftext|>
"""
``` | 3,083 | [
[
-0.020416259765625,
-0.03875732421875,
0.033905029296875,
0.0211639404296875,
-0.0075836181640625,
-0.014556884765625,
-0.01226806640625,
-0.01296234130859375,
0.0391845703125,
0.0252227783203125,
-0.040283203125,
-0.038970947265625,
-0.01432037353515625,
0.... |
HuggingFaceGECLM/REDDIT_comments | 2023-03-17T07:52:51.000Z | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10B<n<100B",
"language:en",
"reddit",
"social-media",
"arxiv:2001.08435",
"region:us"
... | HuggingFaceGECLM | null | null | 6 | 45 | 2023-03-15T14:14:58 | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (from 2006 to January 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
## Dataset Creation
### Curation Rationale
All information fields have been cast to string, as their formats change over time from one dump to the next. Only a reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditors' usernames associated with their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
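One minimal way to pseudonymize the `author` field before further processing — a sketch only; the salting/keying strategy, and whether hashing alone is sufficient for your use case, are up to you:

```python
import hashlib

def pseudonymize(username, salt="replace-with-a-secret-salt"):
    """Replace a username with a stable, non-reversible token."""
    digest = hashlib.sha256((salt + username).encode("utf-8")).hexdigest()
    return "user_" + digest[:12]

# Hypothetical comment record with only the relevant fields.
comment = {"author": "some_redditor", "body": "example comment"}
comment["author"] = pseudonymize(comment["author"])
```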
Though the subreddits selected are considered as being of higher quality, they can still reflect what you can find on the internet in terms of expressions of biases and toxicity.
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. | 7,566 | [
[
-0.039947509765625,
-0.054931640625,
0.0218963623046875,
0.01049041748046875,
-0.03033447265625,
0.0127105712890625,
-0.0281982421875,
-0.0087432861328125,
0.040740966796875,
0.041595458984375,
-0.07000732421875,
-0.05938720703125,
-0.051025390625,
0.0280609... |
mstz/shuttle | 2023-04-16T17:58:41.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"shuttle",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_statlog_(shuttle)_148,
title = {{Statlog (Shuttle)}},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5WS31}}
} | 0 | 45 | 2023-04-14T12:03:39 | ---
language:
- en
tags:
- shuttle
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Shuttle
size_categories:
- 10K<n<100K
task_categories:
- tabular-classification
configs:
- shuttle
- shuttle_binary
license: cc
---
# Shuttle
The [Shuttle dataset](https://archive-beta.ics.uci.edu/dataset/146/statlog+shuttle+satellite) from the [UCI repository](https://archive-beta.ics.uci.edu/).
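The binary configurations listed in the table below are one-vs-rest views of the multiclass labels. A sketch of the binarization, assuming integer class labels:

```python
def one_vs_rest(labels, positive_class):
    """Binarize multiclass labels: 1 for `positive_class`, 0 otherwise."""
    return [int(label == positive_class) for label in labels]

# Made-up multiclass labels for illustration.
labels = [0, 4, 1, 0, 6]
binary_for_class_0 = one_vs_rest(labels, 0)
```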
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| shuttle | Multiclass classification.| |
| shuttle_0 | Binary classification. | Is the image of class 0? |
| shuttle_1 | Binary classification. | Is the image of class 1? |
| shuttle_2 | Binary classification. | Is the image of class 2? |
| shuttle_3 | Binary classification. | Is the image of class 3? |
| shuttle_4 | Binary classification. | Is the image of class 4? |
| shuttle_5 | Binary classification. | Is the image of class 5? |
| shuttle_6 | Binary classification. | Is the image of class 6? | | 1,279 | [
[
-0.044219970703125,
-0.0035991668701171875,
0.0304412841796875,
0.0155487060546875,
-0.01412200927734375,
0.01331329345703125,
0.011444091796875,
-0.00608062744140625,
0.0219268798828125,
0.0408935546875,
-0.0615234375,
-0.06475830078125,
-0.043121337890625,
... |
tomas-gajarsky/cifar10-lt | 2023-06-24T20:22:30.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cifar10",
"language:en",
"license:apache-2.0",
"region:us"
] | tomas-gajarsky | The CIFAR-10-LT imbalanced dataset is comprised of under 60,000 color images, each measuring 32x32 pixels,
distributed across 10 distinct classes.
The dataset includes 10,000 test images, with 1000 images per class,
and fewer than 50,000 training images.
The number of samples within each class of the train set decreases exponentially with factors of 10, 50 or 100. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | 1 | 45 | 2023-05-06T11:56:18 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- cifar10
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-10
pretty_name: Cifar10-LT
dataset_info:
features:
- name: img
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
config_name: cifar10
splits:
- name: train
- name: test
num_bytes: 22772838
num_examples: 10000
download_size: 170498071
---
# Dataset Card for CIFAR-10-LT (Long Tail)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Paper imbalanced example](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf)
- **Leaderboard:** [r-10](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-10) [r-100](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-100)
### Dataset Summary
The CIFAR-10-LT imbalanced dataset comprises fewer than 60,000 color images, each measuring 32x32 pixels,
distributed across 10 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 1000 images per class,
and fewer than 50,000 training images.
Each image is assigned one label.
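The per-class training counts can be derived from the exponential-decay rule commonly used to build CIFAR-LT variants (e.g. in Cui et al., 2019): n_i = n_max * r^(-i/(C-1)). A sketch for imbalance factor r = 100 — an assumed formulation; verify against the actual split:

```python
def long_tail_counts(n_max=5000, imbalance=100, num_classes=10):
    """Per-class counts decaying exponentially from n_max to n_max/imbalance."""
    return [
        round(n_max * imbalance ** (-i / (num_classes - 1)))
        for i in range(num_classes)
    ]

counts = long_tail_counts()
```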
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'label': 0
}
```
### Data Fields
- img: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column: `dataset[0]["img"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`
- label: 0-9 with the following correspondence
0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar10|<50000| 10000|
### Licensing Information
Apache License 2.0
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and all contributors for adding the original balanced cifar10 dataset.
[
-0.05145263671875,
-0.028656005859375,
-0.0038089752197265625,
0.0234375,
-0.0235443115234375,
0.01654052734375,
-0.01678466796875,
-0.0372314453125,
0.0384521484375,
0.01528167724609375,
-0.02960205078125,
-0.050323486328125,
-0.046783447265625,
-0.00306510... |
edarchimbaud/timeseries-1d-stocks | 2023-11-01T04:38:30.000Z | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | edarchimbaud | null | null | 5 | 45 | 2023-05-15T17:02:00 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: open
dtype: float64
- name: high
dtype: float64
- name: low
dtype: float64
- name: close
dtype: float64
- name: adj_close
dtype: float64
- name: volume
dtype: float64
splits:
- name: train
num_bytes: 598394231
num_examples: 8539186
download_size: 296361775
dataset_size: 598394231
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "timeseries-daily-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The timeseries-daily-sp500 dataset provides daily historical data for companies in the S&P 500 index.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for systematic trading. The model performance is evaluated based on the return / risk profile of the positions taken by the model.
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (timestamp[ns, tz=America/New_York]): A timestamp indicating the date of the recorded data. The timestamps are in the America/New_York time zone.
- open (float64): A floating-point number representing the opening price of the stock on the given date.
- high (float64): A floating-point number representing the highest price of the stock on the given date.
- low (float64): A floating-point number representing the lowest price of the stock on the given date.
- close (float64): A floating-point number representing the closing price of the stock on the given date.
- volume (int64): An integer indicating the trading volume (number of shares) of the stock on the given date.
- dividends (float64): A floating-point number representing the dividends paid by the stock on the given date.
- stock_splits (float64): A floating-point number representing any stock splits that occurred on the given date.
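As an example of working with the OHLCV fields, simple daily returns can be computed from consecutive `close` values (a sketch with made-up prices):

```python
def simple_returns(closes):
    """Daily simple returns: close_t / close_{t-1} - 1."""
    return [
        closes[t] / closes[t - 1] - 1
        for t in range(1, len(closes))
    ]

# Hypothetical closing prices for one symbol over three days.
closes = [100.0, 101.0, 99.99]
returns = simple_returns(closes)
```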
### Data Splits
A single split, called train.
## Dataset Creation
### Curation Rationale
The timeseries-daily-sp500 dataset was created to support the development of low-frequency trading algorithms.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from the web, and aggregated.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The timeseries-daily-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The timeseries-daily-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, timeseries-daily-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. | 4,429 | [
[
-0.032440185546875,
-0.030517578125,
-0.00978851318359375,
0.0307464599609375,
-0.019622802734375,
0.006488800048828125,
0.0010890960693359375,
-0.02532958984375,
0.0509033203125,
0.02001953125,
-0.07952880859375,
-0.0609130859375,
-0.03515625,
0.00322723388... |
Nan-Do/instructional_code-search-net-java | 2023-05-20T05:22:11.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"Java",
"Code Generation",
"Instruction Response",
"region:us"
] | Nan-Do | null | null | 0 | 45 | 2023-05-19T03:38:41 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 388087123
num_examples: 467959
download_size: 132263577
dataset_size: 388087123
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- Java
- Code Generation
- Instruction Response
pretty_name: Instructional Java Dataset
---
# Dataset Card for "instructional_code-search-net-java"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-java
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for Java.
The dataset contains two different kinds of tasks:
- Given a piece of code generate a description of what it does.
- Given a description generate a piece of code that fulfils the description.
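A sketch of how such instruction/response pairs can be assembled from a (function, summary) pair — the templates and example below are hypothetical, not the exact ones used to build the dataset:

```python
def make_pairs(code, summary):
    """Build both task directions from one (code, summary) example."""
    return [
        {
            "INSTRUCTION": "Explain what the following Java function does:\n" + code,
            "RESPONSE": summary,
        },
        {
            "INSTRUCTION": "Write a Java function that " + summary,
            "RESPONSE": code,
        },
    ]

pairs = make_pairs(
    "public int add(int a, int b) { return a + b; }",
    "returns the sum of two integers.",
)
```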
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-java
### Annotations
The dataset includes instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to ensure there are no repetitions or meaningless summaries.
### Licensing Information
Apache 2.0 | 1,869 | [
[
-0.0271148681640625,
-0.04962158203125,
0.0020751953125,
0.021087646484375,
-0.0088348388671875,
-0.00859832763671875,
-0.0223541259765625,
-0.00612640380859375,
0.03631591796875,
0.043914794921875,
-0.046051025390625,
-0.06805419921875,
-0.037109375,
0.0141... |
jxu124/refcocoplus | 2023-05-20T19:01:22.000Z | [
"region:us"
] | jxu124 | null | null | 0 | 45 | 2023-05-20T19:00:40 | ---
dataset_info:
features:
- name: sent_ids
sequence: int64
- name: file_name
dtype: string
- name: ann_id
dtype: int64
- name: ref_id
dtype: int64
- name: image_id
dtype: int64
- name: split
dtype: string
- name: sentences
list:
- name: raw
dtype: string
- name: sent
dtype: string
- name: sent_id
dtype: int64
- name: tokens
sequence: string
- name: category_id
dtype: int64
- name: raw_anns
dtype: string
- name: raw_image_info
dtype: string
- name: raw_sentences
dtype: string
- name: image_path
dtype: string
- name: bbox
sequence: float64
- name: captions
sequence: string
- name: global_image_id
dtype: string
- name: anns_id
dtype: string
splits:
- name: train
num_bytes: 81937869
num_examples: 42278
- name: testB
num_bytes: 3273927
num_examples: 1798
- name: test
num_bytes: 3969265
num_examples: 1975
- name: validation
num_bytes: 7399541
num_examples: 3805
download_size: 39772801
dataset_size: 96580602
---
# Dataset Card for "refcocoplus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,270 | [
[
-0.044219970703125,
-0.0009608268737792969,
0.003475189208984375,
0.01727294921875,
-0.019134521484375,
-0.0030040740966796875,
0.0217742919921875,
-0.0203857421875,
0.06427001953125,
0.045166015625,
-0.06396484375,
-0.042144775390625,
-0.0246429443359375,
-... |
hhu-dsml/emowoz | 2023-06-01T13:23:58.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-analysis",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:MultiWOZ",
"source_datasets:Original (human-... | hhu-dsml | EmoWOZ is a user emotion recognition in task-oriented dialogues dataset, consisting all dialogues from MultiWOZ and 1000 additional human-machine dialogues (DialMAGE). Each user utterance is annotated with one of the following emotions: 0: neutral, 1: fearful, 2: dissatisfied, 3: apologetic, 4: abusive, 5: excited, 6: satisfied. System utterances are annotated with -1. For detailed label design and explanation, please refer to the paper and dataset homepage. | @inproceedings{feng-etal-2022-emowoz,
title = "{E}mo{WOZ}: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems",
author = "Feng, Shutong and
Lubis, Nurul and
Geishauser, Christian and
Lin, Hsien-chin and
Heck, Michael and
van Niekerk, Carel and
Gasic, Milica",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.436",
pages = "4096--4113",
abstract = "The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.",
} | 1 | 45 | 2023-05-31T11:55:27 | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- en
size_categories:
- 10K<n<100K
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
multilinguality:
- monolingual
source_datasets:
- MultiWOZ
- Original (human-machine interaction dialogues)
pretty_name: EmoWOZ
task_ids:
- sentiment-classification
- sentiment-analysis
paperswithcode_id: emowoz-1
configs:
- emowoz
- multiwoz
- dialmage
dataset_info:
- config_name: emowoz
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
- config_name: multiwoz
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
- config_name: dialmage
features:
- name: dialogue_id
dtype: string
- name: log
sequence:
- name: text
dtype: string
- name: emotion
dtype: int32
splits:
- name: train
num_bytes: 10661603
num_examples: 9233
- name: validation
num_bytes: 1391634
num_examples: 1100
- name: test
num_bytes: 1409633
num_examples: 1100
---
# Dataset Card for EmoWOZ Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [EmoWOZ Dataset repository](https://zenodo.org/record/6506504), [EmoWOZ Benchmark repository](https://gitlab.cs.uni-duesseldorf.de/general/dsml/emowoz-public)
- **Paper:** [EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems](https://aclanthology.org/2022.lrec-1.436/)
- **Leaderboard:** [Papers with Code leaderboard for EmoWOZ Dataset](https://paperswithcode.com/dataset/emowoz-1)
- **Point of Contact:** [Shutong Feng](mailto:shutong.feng@hhu.de)
### Dataset Summary
EmoWOZ is based on [MultiWOZ, a multi-domain task-oriented dialogue dataset](https://github.com/budzianowski/multiwoz). It contains more than 11K task-oriented dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues (DialMAGE) within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. There are 7 emotion labels, which are adapted from the OCC emotion models: _Neutral_, _Satisfied_, _Dissatisfied_, _Excited_, _Apologetic_, _Fearful_, _Abusive_.
Some of the statistics about the dataset:
| Metric | Value |
| ---------- | ---------------- |
| # Dialogues | 11434 |
| # Turns | 167234 |
| # Annotations | 83617 |
| # Unique Tokens | 28417 |
| Average Turns per Dialogue | 14.63 |
| Average Tokens per Turn | 12.78 |
Emotion Distribution in EmoWOZ and subsets:
| Emotion | EmoWOZ | MultiWOZ | DialMAGE |
| ---------- | ---------------- | ---------- | ---------------- |
| Neutral | 58,656 | 51,426 | 7,230 |
| Satisfied | 17,532 | 17,061 | 471 |
| Dissatisfied | 5,117 | 914 | 4,203 |
| Excited | 971 | 860 | 111 |
| Apologetic | 840 | 838 | 2 |
| Fearful | 396 | 381 | 15 |
| Abusive | 105 | 44 | 61 |
### Supported Tasks and Leaderboards
- 'Emotion Recognition in Conversations': See the [Papers With Code leaderboard](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-emowoz) for more models.
- 'Additional Classification Tasks': According to the initial benchmark [paper](https://aclanthology.org/2022.lrec-1.436/), emotion labels in EmoWOZ can be mapped to sentiment polarities. Therefore, sentiment classification and sentiment analysis can also be performed. Since EmoWOZ has two subsets: MultiWOZ (human-to-human) and DialMAGE (human-to-machine), it is also possible to perform cross-domain emotion/sentiment recognition.
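As a sketch of the emotion-to-sentiment mapping mentioned above, the seven labels can be collapsed into three polarities. The grouping below follows the label descriptions in this card (negative for fearful, dissatisfied, apologetic, and abusive; positive for excited and satisfied), but it is our own reading, not an official mapping shipped with the corpus:

```python
# Hypothetical collapse of EmoWOZ emotion labels into sentiment polarities.
# The grouping is inferred from the labelling scheme described in this card;
# it is not an official artefact of the dataset.
EMOTION_TO_SENTIMENT = {
    "neutral": "neutral",
    "fearful": "negative",
    "dissatisfied": "negative",
    "apologetic": "negative",
    "abusive": "negative",
    "excited": "positive",
    "satisfied": "positive",
}


def to_sentiment(emotion: str) -> str:
    """Collapse a fine-grained emotion label into a sentiment polarity."""
    return EMOTION_TO_SENTIMENT[emotion]
```

With such a mapping, the emotion annotations double as labels for three-class sentiment classification.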
### Languages
Only English is represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string id for the dialogue, a list of strings for the dialogue utterances, and a list of integers for the emotion labels.
```
{
'dialogue_id': 'PMUL4725.json',
'log': {
'text': [
'Hi, i am looking for some museums that I could visit when in town, could you help me find some?',
'Is there an area of town you prefer?',
"No, I don't care.",
"I recommend the Cafe Jello Gallery in the west. It's free to enter!",
'I also need a place to stay',
'Great! There are 33 hotels in the area. What area of town would you like to stay in? What is your preference on price?',
" The attraction should be in the type of museum. I don't care about the price range or the area",
'Just to clarify - did you need a different museum? Or a hotel?',
'That museum from earlier is fine, I just need their postalcode. I need a hotel two in the west and moderately priced. ',
"The postal code for Cafe Jello Gallery is cb30af. Okay, Hobson's House matches your request. ",
'Do they have internet?',
'Yes they do. Would you like me to book a room for you?',
"No thanks. I will do that later. Can you please arrange for taxi service from Cafe Jello to Hobson's House sometime after 04:00?",
'I was able to book that for you. Be expecting a grey Tesla. If you need to reach them, please call 07615015749. ',
'Well that you that is all i need for today',
'Your welcome. Have a great day!'
],
'emotion': [0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1, 0, -1]
}
}
```
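Given an instance shaped like the example above, the annotated user turns can be paired with their labels by skipping the `-1` placeholders on system turns. This is a minimal sketch over a shortened toy instance; the helper name is ours:

```python
def annotated_user_turns(instance):
    """Return (utterance, emotion_id) pairs for the annotated user turns,
    skipping system turns, which carry the placeholder label -1."""
    log = instance["log"]
    return [(text, emo) for text, emo in zip(log["text"], log["emotion"]) if emo != -1]


# Shortened toy instance with the same shape as the example above.
example = {
    "dialogue_id": "PMUL4725.json",
    "log": {
        "text": ["Hi, i am looking for some museums.", "Is there an area of town you prefer?"],
        "emotion": [0, -1],
    },
}
```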
### Data Fields
- `dialogue_id`: a string representing the unique id of the dialogue. For MultiWOZ dialogues, the original id is kept. For DialMAGE dialogues, all ids follow the format DMAGExxx.json, where xxx is an integer with a variable number of digits.
- `text`: a list of strings containing the dialogue turns.
- `emotion`: a list of integers containing the sequence of emotion labels for the dialogue. Specifically,
- -1: system turns with unlabelled emotion
- 0: neutral, no emotion expressed
- 1: fearful, or sad/disappointed, negative emotion elicited by facts/events, which is out of the system's control
- 2: dissatisfied, negative emotion elicited by the system, usually after the system's poor performance
- 3: apologetic, negative emotion from the user, usually expressing apologies for causing confusion or changing search criteria
- 4: abusive, negative emotion elicited by the system, expressed in an impolite way
- 5: excited, positive emotion elicited by facts/events
- 6: satisfied, positive emotion elicited by the system
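The integer codes above can be written out as a lookup table; `ID2LABEL` and `decode_emotions` are our own helper names, not part of the released corpus:

```python
# Lookup table for the integer emotion codes documented above.
ID2LABEL = {
    -1: "unlabelled",  # system turns carry no emotion annotation
    0: "neutral",
    1: "fearful",
    2: "dissatisfied",
    3: "apologetic",
    4: "abusive",
    5: "excited",
    6: "satisfied",
}


def decode_emotions(emotion_ids):
    """Map a dialogue's emotion id sequence to human-readable label names."""
    return [ID2LABEL[i] for i in emotion_ids]
```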
### Data Splits
The EmoWOZ dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Emotion Annotations in Split | Of Which from MultiWOZ | Of Which from DialMAGE |
| ------------- | ---------------------------- | ------------- | ------------------------------------------- |
| Train | 66,474 | 56,778 | 9,696 |
| Validation | 8,509 | 7,374 | 1,135 |
| Test | 8,634 | 7,372 | 1,262 |
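The per-split totals above are the sum of the MultiWOZ and DialMAGE counts, which can be verified with a quick arithmetic check:

```python
# Per-split annotation counts (MultiWOZ, DialMAGE) as reported in the table above.
splits = {
    "train": (56778, 9696),
    "validation": (7374, 1135),
    "test": (7372, 1262),
}

# Total annotations per split; these should match the table's second column.
totals = {name: mwoz + dmage for name, (mwoz, dmage) in splits.items()}
```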
## Dataset Creation
### Curation Rationale
EmoWOZ was built on top of MultiWOZ because MultiWOZ is a well-established dataset for task-oriented dialogue modelling, allowing further study of the impact of user emotions on downstream tasks. The additional 1,000 human-machine dialogues (DialMAGE) were collected to improve emotion coverage and the diversity of emotional expressions.
### Source Data
#### Initial Data Collection and Normalization
MultiWOZ dialogues were inherited from the work of [MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling](https://aclanthology.org/D18-1547/).
DialMAGE dialogues were collected from a human evaluation, conducted on the Amazon Mechanical Turk platform, of an RNN-based dialogue policy trained on MultiWOZ.
#### Who are the source language producers?
The text of both MultiWOZ and DialMAGE was written by workers on the Amazon Mechanical Turk platform. For detailed data collection set-ups, please refer to the respective publications.
### Annotations
All dialogues take place between a _user_ and a _system_ (or an _operator_). The dialogue always starts with a user turn, which is always followed by a system response, and ends with a system turn. Only user turns are annotated with an emotion label.
#### Annotation process
Each user utterance was annotated by three annotators. The final label was determined by majority voting. If there was no agreement, the final label would be resolved manually.
For details such as annotator selection process and quality assurance methods, please refer to the EmoWOZ publication.
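The voting step can be sketched as follows; the helper name is ours, and a `None` return stands in for the manual resolution described above:

```python
from collections import Counter


def majority_label(labels):
    """Majority vote over the three annotations for one utterance.

    Returns None when no label is chosen at least twice, i.e. when the
    case would fall through to manual resolution."""
    (top, count), = Counter(labels).most_common(1)
    return top if count >= 2 else None
```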
#### Who are the annotators?
Annotators are crowdsourced workers on the Amazon Mechanical Turk platform.
### Personal and Sensitive Information
All annotators are anonymised. There is no personal information in EmoWOZ.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop task-oriented dialogue systems that can perceive human emotions and avoid abusive behaviours. This task is useful for building more human-like dialogue agents.
### Discussion of Biases
There is a bias in the emotion distribution between the MultiWOZ (human-human) and DialMAGE (human-machine) subsets of EmoWOZ. The linguistic styles of the two subsets also differ.
As pointed out in [Reevaluating Data Partitioning for Emotion Detection in EmoWOZ](https://arxiv.org/abs/2303.13364), there is also an emotion distribution shift across the train-dev-test split of the MultiWOZ subset. EmoWOZ keeps the original data split of MultiWOZ, which is suitable for task-oriented dialogue modelling, but the emotion distributions of these splits differ. Further investigation will be needed.
### Other Known Limitations
The emotion distribution is unbalanced: _neutral_, _satisfied_, and _dissatisfied_ together make up more than 95% of the labels.
## Additional Information
### Dataset Curators
The collection and annotation of EmoWOZ were conducted by the [Chair for Dialog Systems and Machine Learning at Heinrich Heine Universität Düsseldorf](https://www.cs.hhu.de/lehrstuehle-und-arbeitsgruppen/dialog-systems-and-machine-learning).
### Licensing Information
The EmoWOZ dataset is released under the [CC-BY-NC-4.0 License](https://creativecommons.org/licenses/by-nc/4.0/).
### Citation Information
```
@inproceedings{feng-etal-2022-emowoz,
title = "{E}mo{WOZ}: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems",
author = "Feng, Shutong and
Lubis, Nurul and
Geishauser, Christian and
Lin, Hsien-chin and
Heck, Michael and
van Niekerk, Carel and
Gasic, Milica",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.436",
pages = "4096--4113",
abstract = "The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.",
}
``` | 14,273 |
shawarmas/profanity-filter | 2023-06-22T08:31:38.000Z | [
"region:us"
] | shawarmas | null | null | 0 | 45 | 2023-06-03T09:50:17 | Entry not found | 15 |
clarin-knext/msmarco-pl-qrels | 2023-06-07T08:21:32.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | 0 | 45 | 2023-06-06T22:03:21 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl | 201 |
practical-dreamer/RPGPT_PublicDomain-ShareGPT | 2023-07-04T00:04:40.000Z | [
"task_categories:conversational",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"sharegpt",
"region:us"
] | practical-dreamer | null | null | 7 | 45 | 2023-06-13T15:42:21 | ---
license: mit
task_categories:
- conversational
language:
- en
tags:
- sharegpt
pretty_name: rpgpt-sharegpt
size_categories:
- 10M<n<100M
---
Experimental Synthetic Dataset of Public Domain Character Dialogue in Roleplay Format
Generated using scripts from my https://github.com/practicaldreamer/build-a-dataset repo
| 342 |
causal-lm/flan-muffin | 2023-06-25T03:11:40.000Z | [
"region:us"
] | causal-lm | null | null | 1 | 45 | 2023-06-25T02:54:51 | Entry not found | 15 |
CptNemo/small-shakespear-sonets-1 | 2023-07-06T10:50:10.000Z | [
"license:apache-2.0",
"region:us"
] | CptNemo | null | null | 0 | 45 | 2023-07-06T10:43:29 | ---
license: apache-2.0
---
This dataset is a collection of Shakespeare's sonnets, with a query for an LLM. | 100 |
shibing624/sharegpt_gpt4 | 2023-08-07T14:27:34.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:text-scoring",
"annotations_creators:shibing624",
"language_creators:shibing624",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:https://huggingface.co/datasets/openchat/openchat_sharegpt4... | shibing624 | null | null | 26 | 45 | 2023-07-27T05:45:49 | ---
annotations_creators:
- shibing624
language_creators:
- shibing624
language:
- zh
- en
- gl
- ko
license: cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/tree/main
task_categories:
- text-classification
- text-generation
task_ids:
- text-scoring
paperswithcode_id: LLM
pretty_name: GPT4 sharegpt
---
# Dataset Card
## Dataset Description
- **Repository:** [shibing624/MedicalGPT](https://github.com/shibing624/MedicalGPT)
- **Dataset:** [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- **Size of downloaded dataset files:** 772 M
- **Total amount of disk used:** 772 M
### Dataset Summary
Multi-turn GPT-4 question-answering data selected from ShareGPT; multilingual Q&A.
### Languages
The dataset is multilingual, covering Chinese, English, Japanese, and other common languages.
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
- `conversations`: a list of turn objects, each with `from` and `value` string fields.
```shell
head -n 1 sharegpt_gpt4.jsonl
{"conversations":[
{'from': 'human',
'value': '採用優雅現代中文,用中文繁體字型,回答以下問題。為所有標題或專用字詞提供對應的英語翻譯:Using scholarly style, summarize in detail James Barr\'s book "Semantics of Biblical Language". Provide examples supplied in the book and give corresponding page numbers wherever applicable.'},
{'from': 'gpt',
'value': '《聖經語言的語義學》(Semantics of Biblical Language)是詹姆斯·巴爾(James Barr)於1961年出版的一本研究聖經語言的專著。在這部作品中,巴爾對聖經希伯來語和希臘語的語言學特徵進行了深入的探討,他批判了當時盛行的聖經研究方法,主張應該將語言學原理應用於聖經文本的詮釋,以便更準確地理解聖經的意思。\n\n巴爾在《聖經語言的語義學》中提出了幾個重要的觀點。首先,他主張語言學和語言學家在聖經研究中的角色應該受到重視。他認為,應該避免將聖經語言學和其他古典語言學區分開來,因為聖經語言和其他古代語言一樣,都有其固有的語法、語義和詞彙特點(第18-23頁)。\n\n其次,巴爾批評了當時流行的"字根意義論"(root-meaning fallacy),即過分強調單詞的字根在詮釋聖經含義時的作用。他指出,詞義是由語境決定的,而非僅僅由字根決定。巴爾舉了希伯來語單詞 "כפר"(kāpar)為例,它在不同的語境下可以表示"遮蓋"、"贖回"或"和解",而不是僅僅依賴其字根意義(第183-184頁)。\n\n第三,巴爾強調了詞彙意義在詮釋聖經時的重要性。他認為,了解單詞的意義和用法對於恰當理解聖經文本至關重要。巴爾引用了希伯來語單詞 "ברית"(berîṯ,英文:covenant)和希臘語單詞 "διαθήκη"(diathēkē,英文:testament)為例,解釋了這兩個詞在聖經中的不'},
{'from': 'human', 'value': '繼續'},
{'from': 'gpt',
'value': '同含義和用法。巴爾認為,了解這些詞語在不同語境下的意義對於理解聖經中的盟約和遺囑概念至關重要(第50-53頁)。\n\n第四,巴爾強調語境在詮釋聖經時的重要性。他認為,要理解聖經的含義,就必須考慮到文本的歷史背景、文化環境和語言風格。此外,巴爾提倡從語言學的角度研究聖經,以便更好地理解聖經的結構和風格。他以希伯來詩篇的平行主義(Parallelism)為例,闡述了語言學原理如何有助於揭示詩篇中的意象和修辭結構(第250-252頁)。\n\n總之,《聖經語言的語義學》一書對聖經研究領域產生了深遠的影響。詹姆斯·巴爾通過對聖經語言學的深入研究,提出了許多富有洞察力的觀點,尤其是關於語言學在聖經詮釋中的重要性。這些觀點對後來的聖經研究者產生了很大的啟示,使他們在研究聖經時更加重視語言學原理,從而更加準確地理解聖經的含義。'}
]
}
```
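A minimal sketch of consuming records in this format, assuming each JSONL line holds a `conversations` list of `{from, value}` turns as in the example above (the helper name is ours):

```python
import json


def count_turns(jsonl_line: str) -> int:
    """Number of turns in one ShareGPT-format record."""
    record = json.loads(jsonl_line)
    return len(record["conversations"])


# Toy record with the same shape as the example above.
line = json.dumps({
    "conversations": [
        {"from": "human", "value": "Hello"},
        {"from": "gpt", "value": "Hi there"},
    ]
})
```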
### Data Splits
```shell
> wc -l *
6206 sharegpt_gpt4.jsonl
58674 sharegpt_V3_format.jsonl
38535 sharegpt_zh_38K_format.jsonl
103415 total
```
#### Who are the annotators?
The original authors.
### Licensing Information
Same as ShareGPT.
### Contributions
[shibing624](https://github.com/shibing624) added this dataset. | 2,694 |