id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
ai2lumos/lumos_maths_plan_onetime | 2023-10-23T22:13:54.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"region:us"
] | ai2lumos | null | null | 0 | 7 | 2023-10-23T05:46:41 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
language:
- en
tags:
- language-agent
- maths
- reasoning
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
  - **Lumos** consists of planning, grounding, and execution modules built on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
  - **Lumos** is trained with ~40K high-quality annotations converted from ground-truth reasoning steps in existing benchmarks using GPT-4.
* 🚀 **Competitive Performance**:
- 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
  - 🚀 **Lumos** performs better than open agent baseline formulations, including **chain-of-thought** and **unmodularized** training.
- 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_maths_plan_onetime` is the data for training the **planning** module on the **maths** task in the **Lumos-Onetime (Lumos-O)** formulation.
The sources of the training annotations are shown below:
| Task | Number |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
## Models Trained with the Data
`lumos_maths_plan_onetime` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_maths_plan_onetime`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_maths_plan_onetime) |
## Citation
If you find this work relevant to your research, please feel free to cite it!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | 2,461 | [
[
-0.004856109619140625,
-0.041839599609375,
0.023895263671875,
0.025634765625,
-0.01502227783203125,
-0.0029582977294921875,
-0.0275726318359375,
-0.045196533203125,
0.0308380126953125,
0.033203125,
-0.045654296875,
-0.037384033203125,
-0.0229644775390625,
-0... |
luisa879862/elderly | 2023-10-23T15:51:27.000Z | [
"region:us"
] | luisa879862 | null | null | 0 | 7 | 2023-10-23T14:20:02 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
KODESAM/autotrain-data-test3 | 2023-10-23T21:31:47.000Z | [
"region:us"
] | KODESAM | null | null | 0 | 7 | 2023-10-23T21:20:20 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The alert `ClusterOperatorDown` is triggered by [cluster-version-operator]
(CVO) when a `ClusterOperator` is not in the `Available` state for a certain
period of time. An operand is `Available` when it is functional in the cluster.
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/openshift/cluster-version-operator
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
This alert indicates that an outage has occurred in your cluster. Investigate
the issue as soon as possible.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
The alert message provides the name of the Operator that triggered the alert,
as shown in the following example:
```text
- alertname = ClusterOperatorDown
...
- name = console
...
```
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
* Review the status of all Operators to discover if multiple Operators are
down:
```console
$ oc get clusteroperator
```
* Review information about the current status of the Operator:
```console
$ oc get clusteroperator $CLUSTEROPERATOR -ojson | jq .status.conditions
```
* Review the associated resources for the Operator:
```console
$ oc get clusteroperator $CLUSTEROPERATOR -ojson | jq .status.relatedObjects
```
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 5,577 | [
[
-0.03570556640625,
-0.038848876953125,
0.0254974365234375,
0.007568359375,
-0.027801513671875,
-0.011810302734375,
-0.01110076904296875,
-0.039337158203125,
0.040496826171875,
0.058258056640625,
-0.051177978515625,
-0.06768798828125,
-0.043670654296875,
-0.0... |
centroIA/MistralInstructScenariosv2 | 2023-10-23T22:59:42.000Z | [
"region:us"
] | centroIA | null | null | 0 | 7 | 2023-10-23T22:59:40 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2687418
num_examples: 967
download_size: 698118
dataset_size: 2687418
---
# Dataset Card for "MistralInstructScenariosv2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 521 | [
[
-0.026123046875,
-0.005107879638671875,
0.00292205810546875,
0.025421142578125,
-0.021148681640625,
-0.0104522705078125,
0.041015625,
-0.005245208740234375,
0.037078857421875,
0.03802490234375,
-0.0584716796875,
-0.03656005859375,
-0.042633056640625,
-0.0337... |
Rewcifer/ct_scans_90pct_2048_cutoff | 2023-10-24T01:25:22.000Z | [
"region:us"
] | Rewcifer | null | null | 0 | 7 | 2023-10-24T01:24:57 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 842235884.5219477
num_examples: 168647
download_size: 154765997
dataset_size: 842235884.5219477
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ct_scans_90pct_2048_cutoff"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 478 | [
[
-0.046905517578125,
-0.0023860931396484375,
0.03851318359375,
0.01511383056640625,
-0.0304412841796875,
-0.0308990478515625,
0.0237579345703125,
0.0028896331787109375,
0.042449951171875,
0.04632568359375,
-0.0653076171875,
-0.04852294921875,
-0.03692626953125,
... |
sayan1101/sft_test_custom_dataset_RLHF_updated | 2023-10-24T07:24:08.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 7 | 2023-10-24T07:12:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: label
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 35042
num_examples: 51
- name: test
num_bytes: 35042
num_examples: 51
- name: valid
num_bytes: 35042
num_examples: 51
download_size: 87354
dataset_size: 105126
---
# Dataset Card for "sft_test_custom_dataset_RLHF_updated"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 686 | [
[
-0.0295867919921875,
-0.041839599609375,
0.001399993896484375,
0.0206451416015625,
-0.012969970703125,
0.016357421875,
0.016448974609375,
-0.0105438232421875,
0.052001953125,
0.04241943359375,
-0.068603515625,
-0.037689208984375,
-0.0168914794921875,
-0.0078... |
PeterLawrence/processed_demo | 2023-10-24T13:46:36.000Z | [
"region:us"
] | PeterLawrence | null | null | 0 | 7 | 2023-10-24T13:46:34 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Completion
dtype: string
splits:
- name: train
num_bytes: 13723
num_examples: 34
download_size: 6610
dataset_size: 13723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 474 | [
[
-0.035003662109375,
-0.024749755859375,
0.0223236083984375,
0.0189361572265625,
-0.01323699951171875,
-0.0015649795532226562,
0.00870513916015625,
-0.00276947021484375,
0.060333251953125,
0.038543701171875,
-0.07537841796875,
-0.053558349609375,
-0.0326843261718... |
Icaruas/xml | 2023-10-24T17:07:23.000Z | [
"region:us"
] | Icaruas | null | null | 0 | 7 | 2023-10-24T17:06:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lx-DeanE/fine-tuning-dataset-parquet | 2023-10-25T04:51:39.000Z | [
"region:us"
] | lx-DeanE | null | null | 0 | 7 | 2023-10-25T04:51:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mmcho1157/attackgpt_base_2 | 2023-10-25T06:24:38.000Z | [
"region:us"
] | mmcho1157 | null | null | 0 | 7 | 2023-10-25T06:24:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 464
num_examples: 2
download_size: 2339
dataset_size: 464
---
# Dataset Card for "attackgpt_base_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 430 | [
[
-0.036834716796875,
-0.0305328369140625,
0.0017442703247070312,
0.0168304443359375,
-0.0161895751953125,
-0.005687713623046875,
0.0265960693359375,
-0.007442474365234375,
0.04132080078125,
0.0301513671875,
-0.03717041015625,
-0.0411376953125,
-0.059967041015625,... |
Arabic-Clip/Arabic_dataset_13M_translated_cleaned_v2_jsonl_format_ViT-B-16-plus-240 | 2023-10-27T21:09:55.000Z | [
"region:us"
] | Arabic-Clip | null | null | 0 | 7 | 2023-10-25T06:28:50 | This dataset repo contains the dataset (CC3M+CC12M+SBU) translated using opus-mt-en-ar and cleaned. Its size about 13M | 118 | [
[
-0.028839111328125,
-0.01045989990234375,
0.020721435546875,
0.007450103759765625,
-0.046600341796875,
-0.00958251953125,
-0.025115966796875,
-0.026763916015625,
0.007411956787109375,
0.09381103515625,
-0.06329345703125,
-0.038848876953125,
-0.0309295654296875,
... |
w95/fin | 2023-10-25T12:30:26.000Z | [
"region:us"
] | w95 | null | null | 0 | 7 | 2023-10-25T12:26:52 | ---
configs:
- config_name: default
data_files:
- split: train
path: train.jsonl
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
---
| 253 | [
[
-0.00897979736328125,
-0.01299285888671875,
0.045623779296875,
0.03460693359375,
-0.01352691650390625,
0.0164794921875,
0.0252227783203125,
0.00353240966796875,
0.040496826171875,
0.051788330078125,
-0.045501708984375,
-0.016632080078125,
-0.049652099609375,
... |
Rocinante/insturction_merge | 2023-10-27T06:32:19.000Z | [
"region:us"
] | Rocinante | null | null | 0 | 7 | 2023-10-25T15:28:48 | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: history
sequence:
sequence: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 134671425
num_examples: 85081
download_size: 69561425
dataset_size: 134671425
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "insturction_merge"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 622 | [
[
-0.05517578125,
-0.0098724365234375,
0.003116607666015625,
0.01450347900390625,
-0.0213775634765625,
0.0091094970703125,
0.02410888671875,
-0.0160369873046875,
0.046478271484375,
0.041290283203125,
-0.05780029296875,
-0.03857421875,
-0.0428466796875,
-0.0239... |
anlp/annotation2_wo_elimination | 2023-10-26T04:41:27.000Z | [
"region:us"
] | anlp | null | null | 0 | 7 | 2023-10-26T04:26:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: sentences
sequence: string
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 1326274
num_examples: 3384
download_size: 0
dataset_size: 1326274
---
# Dataset Card for "annotation2_wo_elimination"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 496 | [
[
-0.04852294921875,
-0.0338134765625,
0.01273345947265625,
0.001220703125,
-0.02215576171875,
-0.01751708984375,
0.00585174560546875,
-0.033905029296875,
0.040924072265625,
0.03753662109375,
-0.058441162109375,
-0.045989990234375,
-0.047882080078125,
-0.01195... |
sayan1101/instr_finetune_modelv1 | 2023-10-26T09:50:01.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 7 | 2023-10-26T09:21:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 27407564
num_examples: 52000
download_size: 0
dataset_size: 27407564
---
# Dataset Card for "instr_finetune_modelv1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 447 | [
[
-0.039886474609375,
-0.0212860107421875,
0.0024356842041015625,
0.006587982177734375,
-0.0198516845703125,
-0.019012451171875,
0.025848388671875,
0.00875091552734375,
0.031768798828125,
0.039825439453125,
-0.0595703125,
-0.04876708984375,
-0.03277587890625,
... |
riturralde/keywords-umsa | 2023-10-26T11:45:20.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:es",
"region:us"
] | riturralde | null | null | 0 | 7 | 2023-10-26T11:42:10 | ---
dataset_info:
features:
- name: abstract
dtype: string
- name: title
dtype: string
- name: keywords
dtype: string
splits:
- name: train
num_bytes: 28203744
num_examples: 15418
download_size: 15121873
dataset_size: 28203744
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- summarization
language:
- es
size_categories:
- 10K<n<100K
---
# Dataset Card for "keywords-umsa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 598 | [
[
-0.037567138671875,
-0.01137542724609375,
0.0159149169921875,
0.003658294677734375,
-0.03033447265625,
0.00310516357421875,
0.0093841552734375,
0.01241302490234375,
0.051788330078125,
0.04730224609375,
-0.03851318359375,
-0.07147216796875,
-0.040283203125,
-... |
hhhaaahhhaa/text-guided-vc-google-tts-api | 2023-10-27T09:43:13.000Z | [
"region:us"
] | hhhaaahhhaa | null | null | 0 | 7 | 2023-10-26T12:08:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 3704687470
num_examples: 90000
- name: validation
num_bytes: 203094306
num_examples: 5000
- name: test
num_bytes: 209112202
num_examples: 5000
download_size: 140841385
dataset_size: 4116893978
---
# Dataset Card for "text-guided-vc-google-tts-api"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,475 | [
[
-0.0236663818359375,
-0.0303192138671875,
0.030609130859375,
0.0235748291015625,
-0.021820068359375,
0.01010894775390625,
0.00775146484375,
0.00402069091796875,
0.047637939453125,
0.03790283203125,
-0.0758056640625,
-0.07177734375,
-0.041046142578125,
-0.005... |
Kishore05/kan100 | 2023-10-26T17:40:48.000Z | [
"region:us"
] | Kishore05 | null | null | 0 | 7 | 2023-10-26T17:31:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Taj-Mahal/magic-the-gathering | 2023-10-31T20:58:35.000Z | [
"region:us"
] | Taj-Mahal | null | null | 1 | 7 | 2023-10-26T19:43:32 | ---
dataset_info:
features:
- name: name
dtype: string
- name: firstPrinting
dtype: string
- name: manaCost
dtype: string
- name: convertedManaCost
dtype: float64
- name: type
dtype: string
- name: text
dtype: string
- name: power
dtype: string
- name: toughness
dtype: string
- name: loyalty
dtype: string
- name: layout
dtype: string
splits:
- name: train
num_bytes: 6999997
num_examples: 27703
- name: train_clean
num_bytes: 6813519.081146446
num_examples: 26965
download_size: 2539289
dataset_size: 13813516.081146445
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: train_clean
path: data/train_clean-*
---
# Dataset Card for "magic-the-gathering"
This is a HuggingFace adaptation of the [MTGJSON Atomic Card Database](https://mtgjson.com/data-models/card/card-atomic/) from the Taj-Mahal Data Science & Machine Learning Group.
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("Taj-Mahal/magic-the-gathering")
``` | 1,082 | [
[
-0.03118896484375,
-0.0479736328125,
-0.021087646484375,
-0.00463104248046875,
-0.0345458984375,
0.006687164306640625,
0.0030651092529296875,
-0.018585205078125,
0.039459228515625,
0.037445068359375,
-0.06512451171875,
-0.044891357421875,
-0.0294189453125,
0... |
ManuBansal/33param_snp500_trainingSet | 2023-10-27T10:02:05.000Z | [
"region:us"
] | ManuBansal | null | null | 0 | 7 | 2023-10-26T22:48:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
19kmunz/iot-23-preprocessed | 2023-10-31T14:47:39.000Z | [
"task_categories:question-answering",
"task_categories:tabular-classification",
"language:en",
"code",
"region:us"
] | 19kmunz | null | null | 0 | 7 | 2023-10-27T16:39:01 | ---
dataset_info:
features:
- name: id.orig_p
dtype: int64
- name: id.resp_p
dtype: int64
- name: proto
dtype: string
- name: service
dtype: string
- name: duration
dtype: float64
- name: orig_bytes
dtype: int64
- name: resp_bytes
dtype: int64
- name: conn_state
dtype: string
- name: missed_bytes
dtype: int64
- name: history
dtype: string
- name: orig_pkts
dtype: int64
- name: orig_ip_bytes
dtype: int64
- name: resp_pkts
dtype: int64
- name: resp_ip_bytes
dtype: int64
- name: label
dtype: string
splits:
- name: train
num_bytes: 93994789
num_examples: 819024
download_size: 11805369
dataset_size: 93994789
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
- tabular-classification
language:
- en
tags:
- code
pretty_name: d
---
# Aposemat IoT-23 - a Labeled Dataset with Malicious and Benign IoT Network Traffic
**Homepage:** [https://www.stratosphereips.org/datasets-iot23](https://www.stratosphereips.org/datasets-iot23)
This dataset contains a subset of the data from 20 captures of malicious network traffic and 3 captures of live benign traffic on Internet of Things (IoT) devices. Created by Sebastian Garcia, Agustin Parmisano, & Maria Jose Erquiaga at the Avast AIC laboratory with the funding of Avast Software, this dataset is one of the best in the field for Intrusion Detection Systems (IDS) for IoT devices [(Comparative Analysis of IoT Botnet Datasets)](https://doi.org/10.53070/bbd.1173687).
The selection of the subset was determined by [Aqeel Ahmed on Kaggle](https://www.kaggle.com/datasets/engraqeel/iot23preprocesseddata) and contained 6 million samples. Neither the Kaggle upload nor this one employs data balancing, and the Kaggle card does not document what criteria were used to select these samples. If you want to ensure best practice, use this dataset to mock up processing the data into a model before using the full dataset with data balancing. That will require processing the 8 GB of conn.log.labelled files.
This dataset only notes whether the traffic is malicious or benign. The original dataset also labels the type of malicious traffic, so this processing of the dataset is suited only for binary classification.
# Feature information:
All features originate from the [Zeek](https://docs.zeek.org/en/master/scripts/base/protocols/conn/main.zeek.html#type-Conn::Info) processing performed by the dataset creators. [See notes here for caveats for each column](https://docs.zeek.org/en/master/scripts/base/protocols/conn/main.zeek.html#type-Conn::Info).
<details>
<summary>Expand for feature names, descriptions, and datatypes</summary>
Name: id.orig_p
Description: The originator’s port number.
Data type: int64 - uint64 in original
Name: id.resp_p
Description: The responder’s port number.
Data type: int64 - uint64 in original
Name: proto
Description: The transport layer protocol of the connection.
Data type: string - enum(unknown_transport, tcp, udp, icmp). Only TCP and UDP in subset
Name: service
Description: An identification of an application protocol being sent over the connection.
Data type: string
Name: duration
Description: How long the connection lasted.
Data type: float64 - time interval
Name: orig_bytes
Description: The number of payload bytes the originator sent.
Data type: int64 - uint64 in original
Name: resp_bytes
Description:The number of payload bytes the responder sent.
Data type: int64 - uint64 in original
Name: conn_state
Description: Value indicating connection state. (S0, S1, SF, REJ, S2, S3, RSTO, RSTR, RSTOS0, RSTRH, SH, SHR, OTH)
Data type: string
Name: missed_bytes
Description: Indicates the number of bytes missed in content gaps, which is representative of packet loss.
Data type: int64 - uint64 in original
Name: history
Description: Records the state history of connections as a string of letters.
Data type: string
Name: orig_pkts
Description: Number of packets that the originator sent.
Data type: int64 - uint64 in original
Name: orig_ip_bytes
Description: Number of IP level bytes that the originator sent.
Data type: int64 - uint64 in original
Name: resp_pkts
Description: Number of packets that the responder sent.
Data type: int64 - uint64 in original
Name: resp_ip_bytes
Description: Number of IP level bytes that the responder sent.
Data type: int64 - uint64 in original
Name: label
Description: Specifies if data point is malicious or benign
Data type: string - enum(Malicious, Benign)
NOTE: ts, uid, id.orig_h, and id.resp_h have been removed as they are dataset specific. Models should not be trained with specific timestamps or IP addresses (id.orig_h) from this dataset, as that can lead to overfitting to dataset-specific times and addresses.
Furthermore, local_orig and local_resp have been removed as they are null in all rows, making them useless for training.
</details>
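As a minimal sketch of the binary-classification setup described above (the `LABEL_MAP` and `encode_labels` names are illustrative, not part of the dataset or any library), the string `label` column can be encoded to integer targets like this:

```python
# Minimal sketch: map the string `label` column to integer targets for
# binary classification. LABEL_MAP and encode_labels are illustrative
# names, not part of the dataset tooling.
LABEL_MAP = {"Benign": 0, "Malicious": 1}

def encode_labels(rows):
    """Return integer targets for a list of row dicts with a `label` key."""
    return [LABEL_MAP[row["label"]] for row in rows]

sample = [{"label": "Benign"}, {"label": "Malicious"}, {"label": "Benign"}]
print(encode_labels(sample))  # [0, 1, 0]
```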
## Citation
If you are using this dataset for your research, please reference it as “Sebastian Garcia, Agustin Parmisano, & Maria Jose Erquiaga. (2020). IoT-23: A labeled dataset with malicious and benign IoT network traffic (Version 1.0.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.4743746”
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 5,514 | [
[
-0.037139892578125,
-0.05523681640625,
-0.01128387451171875,
0.0145263671875,
-0.0150909423828125,
-0.005725860595703125,
0.02276611328125,
-0.04583740234375,
0.036468505859375,
0.0531005859375,
-0.039703369140625,
-0.02886962890625,
-0.0227813720703125,
0.0... |
MikuHH/hjhgjhjhjhj | 2023-10-27T18:33:33.000Z | [
"region:us"
] | MikuHH | null | null | 0 | 7 | 2023-10-27T18:19:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
linhtran92/soict_private_test | 2023-10-28T02:48:16.000Z | [
"region:us"
] | linhtran92 | null | null | 0 | 7 | 2023-10-28T02:48:00 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: id
dtype: string
splits:
- name: train
num_bytes: 378888808.625
num_examples: 2139
download_size: 351233150
dataset_size: 378888808.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "soict_private_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 491 | [
[
-0.03094482421875,
-0.02490234375,
0.0082244873046875,
0.024658203125,
-0.0126953125,
-0.0082550048828125,
0.0197296142578125,
-0.0010929107666015625,
0.0443115234375,
0.035064697265625,
-0.0595703125,
-0.055755615234375,
-0.034332275390625,
-0.0169067382812... |
ashokpoudel/English-Nepali-Translation-Instruction-Dataset | 2023-10-28T08:11:10.000Z | [
"region:us"
] | ashokpoudel | null | null | 0 | 7 | 2023-10-28T08:02:31 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1712164438
num_examples: 3560496
download_size: 775881227
dataset_size: 1712164438
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## Dataset Card: Instruction-Based English-Nepali Translation Dataset
### Dataset Description
This dataset consists of English-Nepali parallel sentences converted into an instruction-based format. Each entry prompts the model to translate a given sentence from English to Nepali or vice versa.
### Source Data
**Original Dataset**: English-Nepali Parallel Sentences
**Paper**: [NepBERTa: Nepali Language Model Trained in a Large Corpus](https://aura.abdn.ac.uk/bitstream/handle/2164/21465/Timilsina_etal_ACLA_NepNERTa_VOR.pdf)
**Authors**: Milan Gautam, Sulav Timilsina, Binod Bhattarai
**Conference**: Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
### Dataset Format
Each entry in the dataset has the following format:
```
[INST] Please translate "sentence in source language" into target language [/INST] translation in target language
```
The dataset supports both English to Nepali and Nepali to English translations.
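For illustration, a minimal helper (hypothetical, not part of the dataset tooling) that wraps a parallel sentence pair in the instruction format above, for either translation direction:

```python
def to_instruction(source_sentence, target_sentence, target_language="Nepali"):
    # Wrap a parallel sentence pair in the [INST] ... [/INST] format shown above.
    return (f'[INST] Please translate "{source_sentence}" into '
            f'{target_language} [/INST] {target_sentence}')

# English -> Nepali
print(to_instruction("Hello", "नमस्ते"))
# Nepali -> English, for the reverse direction
print(to_instruction("नमस्ते", "Hello", target_language="English"))
```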
### Intended Use
This dataset is designed for fine-tuning models on instruction-based translation tasks, especially suited for models like Llama Instruct. It can be used to develop models capable of translating between English and Nepali using instruction-based prompts.
### Data Collection
The data was derived from the English-Nepali parallel corpus presented in the NepBERTa paper. The sentences were then converted into an instruction-based format to facilitate training with instruction-based models.
### Limitations
- The dataset's performance and utility are tied to the quality of the original English-Nepali corpus.
- The instruction-based format may introduce some redundancy and might not be ideal for all NLP tasks or models.
### Licensing
Users should ensure they have the right to use and share the data, and should consult the licensing terms of the original English-Nepali corpus before redistribution.
--- | 2,282 | [truncated embedding vector] |
Harsh-7300/english_to_french | 2023-10-28T12:29:55.000Z | [
"task_categories:translation",
"size_categories:1K<n<10K",
"language:en",
"language:fr",
"license:mit",
"legal",
"region:us"
] | Harsh-7300 | null | null | 0 | 7 | 2023-10-28T10:44:49 | ---
license: mit
dataset_card: H@rsh7300
language:
- en
- fr
task_categories:
- translation
pretty_name: dataset3
size_categories:
- 1K<n<10K
tags:
- legal
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,520 | [truncated embedding vector] |
proan/fashion | 2023-10-28T20:58:24.000Z | [
"region:us"
] | proan | Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable. | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 0 | 7 | 2023-10-28T16:59:33 | Entry not found | 15 | [truncated embedding vector] |
tokentale1/Text-SQL-Ethereum_tokentale | 2023-10-28T20:10:16.000Z | [
"region:us"
] | tokentale1 | null | null | 0 | 7 | 2023-10-28T19:34:46 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 404053473
num_examples: 291757
download_size: 0
dataset_size: 404053473
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Text-SQL-Ethereum_tokentale"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 532 | [truncated embedding vector] |
health360/Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized | 2023-10-29T08:06:07.000Z | [
"region:us"
] | health360 | null | null | 0 | 7 | 2023-10-29T06:53:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 24337034495
num_examples: 1468352
download_size: 8063172866
dataset_size: 24337034495
---
# Dataset Card for "Ultrachat-Multiple-Conversations-Alpaca-Tinyllama-Tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 585 | [truncated embedding vector] |
tiwes/w1k | 2023-10-30T09:44:59.000Z | [
"region:us"
] | tiwes | null | null | 0 | 7 | 2023-10-30T09:23:15 | Entry not found | 15 | [truncated embedding vector] |
aminlouhichi/donut4 | 2023-10-30T12:35:01.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 7 | 2023-10-30T12:34:47 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 25755968.0
num_examples: 60
- name: validation
num_bytes: 25755968.0
num_examples: 60
- name: test
num_bytes: 25755968.0
num_examples: 60
download_size: 55048836
dataset_size: 77267904.0
---
# Dataset Card for "donut4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 698 | [truncated embedding vector] |
dwadden/healthver_entailment | 2023-10-31T00:37:09.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | dwadden | HealthVer is a dataset of public health claims, verified against scientific research articles. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against full article abstracts rather than individual sentences. Entailment labels and rationales are included. | @inproceedings{Sarrouti2021EvidencebasedFO,
title={Evidence-based Fact-Checking of Health-related Claims},
author={Mourad Sarrouti and Asma Ben Abacha and Yassine Mrabet and Dina Demner-Fushman},
booktitle={Conference on Empirical Methods in Natural Language Processing},
year={2021},
url={https://api.semanticscholar.org/CorpusID:244119074}
} | 0 | 7 | 2023-10-30T22:27:12 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
pretty_name: HealthVer
dataset_info:
features:
- name: claim_id
dtype: int32
- name: claim
dtype: string
- name: abstract_id
dtype: int32
- name: title
dtype: string
- name: abstract
sequence: string
- name: verdict
dtype: string
- name: evidence
sequence: int32
splits:
- name: train
num_bytes: 9490482
num_examples: 5292
- name: validation
num_bytes: 1707997
num_examples: 940
- name: test
num_bytes: 1620257
num_examples: 903
download_size: 3610222
dataset_size: 12818736
---
# Dataset Card for "healthver_entailment"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
## Dataset Description
- **Repository:** <https://github.com/sarrouti/HealthVer>
- **Point of Contact:** [David Wadden](mailto:davidw@allenai.org)
### Dataset Summary
HealthVer is a dataset of public health claims, verified against scientific research articles. For this version of the dataset, we follow the preprocessing from the MultiVerS modeling paper https://github.com/dwadden/multivers, verifying claims against full article abstracts rather than individual sentences. Entailment labels and rationales are included.
## Dataset Structure
### Data Fields
- `claim_id`: An `int32` claim identifier.
- `claim`: A `string`.
- `abstract_id`: An `int32` abstract identifier.
- `title`: A `string`.
- `abstract`: A list of `strings`, one for each sentence in the abstract.
- `verdict`: The fact-checking verdict, a `string`.
- `evidence`: A list of sentence indices into the abstract, identifying the sentences that provide evidence for the verdict.
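As a sketch of how these fields fit together (the record below is invented for illustration; the schema above declares `evidence` as a sequence of `int32`, i.e. sentence indices into `abstract`):

```python
# Invented example record; `evidence` holds indices into the `abstract` list.
record = {
    "claim": "Vitamin C cures COVID-19.",
    "abstract": [
        "Background sentence.",
        "We found no effect of vitamin C on COVID-19 outcomes.",
        "Conclusion sentence.",
    ],
    "verdict": "REFUTED",
    "evidence": [1],
}

# Recover the evidence sentences backing the verdict.
evidence_sentences = [record["abstract"][i] for i in record["evidence"]]
print(evidence_sentences)  # ['We found no effect of vitamin C on COVID-19 outcomes.']
```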
| 1,972 | [truncated embedding vector] |
mmmino/Action | 2023-10-31T02:27:41.000Z | [
"region:us"
] | mmmino | null | null | 0 | 7 | 2023-10-31T01:02:29 | Entry not found | 15 | [truncated embedding vector] |
surajbijjahalli/ISIC2018 | 2023-10-31T07:52:45.000Z | [
"region:us"
] | surajbijjahalli | null | null | 0 | 7 | 2023-10-31T05:27:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 2203724361.79
num_examples: 2594
- name: validation
num_bytes: 241025351.0
num_examples: 100
- name: test
num_bytes: 2389508202.0
num_examples: 1000
download_size: 13874599089
dataset_size: 4834257914.79
---
# Dataset Card for "ISIC2018"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 709 | [truncated embedding vector] |
thanhnew2001/taipy | 2023-11-01T03:45:05.000Z | [
"region:us"
] | thanhnew2001 | null | null | 0 | 7 | 2023-10-31T06:46:55 | Entry not found | 15 | [truncated embedding vector] |
ArunSharmaaaaa/databeta | 2023-10-31T07:06:28.000Z | [
"region:us"
] | ArunSharmaaaaa | null | null | 0 | 7 | 2023-10-31T07:06:07 | Entry not found | 15 | [truncated embedding vector] |
toilaluan/reward_tuned_prompt_v1 | 2023-11-01T13:55:05.000Z | [
"region:us"
] | toilaluan | null | null | 0 | 7 | 2023-10-31T08:53:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: model_type
dtype: string
- name: request_id
dtype: int64
- name: topic
dtype: string
- name: reward
dtype: float64
- name: individual_rewards
struct:
- name: clip_aesthetic_rewarder
dtype: float64
- name: pick_rewarder
dtype: float64
- name: image_rewarder
dtype: float64
- name: hps_v2_rewarder
dtype: float64
splits:
- name: train
num_bytes: 463200
num_examples: 4500
download_size: 160093
dataset_size: 463200
---
# Dataset Card for "reward_tuned_prompt_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 803 | [truncated embedding vector] |
ArunSharmaaaaa/dataaa | 2023-10-31T12:16:27.000Z | [
"region:us"
] | ArunSharmaaaaa | null | null | 0 | 7 | 2023-10-31T10:43:14 | Entry not found | 15 | [truncated embedding vector] |
ContextualAI/wikitext-103-mini | 2023-10-31T20:03:42.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 7 | 2023-10-31T20:03:38 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1795333.5
num_examples: 6000
download_size: 1601198
dataset_size: 1795333.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikitext-103-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 449 | [truncated embedding vector] |
Wayne017/floor_plan | 2023-11-01T15:36:55.000Z | [
"region:us"
] | Wayne017 | null | null | 0 | 7 | 2023-11-01T07:34:10 | Entry not found | 15 | [truncated embedding vector] |
Sheokedech/id_instructions-id-small | 2023-11-01T14:44:19.000Z | [
"region:us"
] | Sheokedech | null | null | 0 | 7 | 2023-11-01T14:21:43 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 0
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "id_instructions-id-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 401 | [truncated embedding vector] |
amphora/JeongEum-v.0.2 | 2023-11-02T07:14:58.000Z | [
"region:us"
] | amphora | null | null | 0 | 7 | 2023-11-02T06:45:25 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: source
dtype: string
- name: text
dtype: string
- name: token_count
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1403486281
num_examples: 580812
download_size: 785058164
dataset_size: 1403486281
---
# Dataset Card for "JeongEum-v.0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 594 | [truncated embedding vector] |
QuyenAnhDE/Diseases_Symptoms | 2023-11-02T08:44:36.000Z | [
"region:us"
] | QuyenAnhDE | null | null | 0 | 7 | 2023-11-02T08:39:12 | ## Dataset Details
The data was sourced from various medical websites accessible through Google search.
Dataset size: 400 rows × 4 columns.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Code** [More Information Needed]
- **Name:** [More Information Needed]
- **Symptoms** [More Information Needed]
- **Treatments** [More Information Needed]
| 381 | [truncated embedding vector] |
QuyenAnhDE/Concat_medical | 2023-11-02T11:12:06.000Z | [
"language:en",
"medical",
"region:us"
] | QuyenAnhDE | null | null | 0 | 7 | 2023-11-02T11:05:16 | ---
language:
- en
tags:
- medical
---
## Dataset Details
This is a dataset of disease names, their definitions and descriptions.
The information is extracted from the Disease Ontology.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Question** [More Information Needed]
- **Context** [More Information Needed]
| 356 | [truncated embedding vector] |
NbAiLab/NCC | 2022-12-06T14:33:13.000Z | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:2G<n<1B",
"source_datasets:original",
"language:en",
"language:nb",
"language:no",
"language:nn",
"language:sv",
"language:da",
"language:is",
"la... | NbAiLab | \\nNorwegian Colossal Corpus v2. Short sequences of maximum 100k characters. | @inproceedings{kummervold-etal-2021-operationalizing,
title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
author = "Kummervold, Per E and
De la Rosa, Javier and
Wetjen, Freddy and
Brygfjeld, Svein Arne",
booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may # " 31--2 " # jun,
year = "2021",
address = "Reykjavik, Iceland (Online)",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = "https://aclanthology.org/2021.nodalida-main.3",
pages = "20--29",
abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
} | 14 | 6 | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
- nb
- no
- nn
- sv
- da
- is
- fo
license:
- other
multilinguality:
- multilingual
pretty_name: NCC
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- language-modeling
extra_gated_prompt: "The Directive on Copyright in the Digital Single Market, which came into force on June 6 2019, amends the European Union copyright and database legislation and allows for Text and Data Mining (TDM) activities for research organizations and cultural heritage institutions. Under the terms of the aforementioned directive, by clicking on 'Access repository' you agree on using the text and data contained in this dataset for non-commercial scientific purposes only."
---
# Dataset Card for NbAiLab/NCC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fiels)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Document Types](#document-types)
- [Languages](#languages)
    - [Publish Period](#publish-periode)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/NbAiLab/notram
- **Repository:** https://github.com/NbAiLab/notram
- **Paper:** https://arxiv.org/abs/2104.09617
- **Point of Contact:** [Freddy Wetjen](mailto:freddy.wetjen@nb.no)
The Norwegian Colossal Corpus is a collection of multiple smaller Norwegian corpora suitable for training large language models. We have done extensive cleaning of the datasets and made them available in a common format. The total size of the NCC is currently 45GB.
## How to Use
```python
from datasets import load_dataset
data = load_dataset("NbAiLab/NCC", streaming=True)
```
## Download Data
If you do not want to use the HuggingFace Dataset-library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
```bash
# Clone the training set
git clone https://huggingface.co/datasets/NbAiLab/NCC
# Create one large training file of all shards without unpacking
cat NCC/data/train*.gz > onefile.json.gz
```
<details>
<summary>List of all the files.</summary>
* [train-shard-0001-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0001-of-0046.json.gz)
* [train-shard-0002-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0002-of-0046.json.gz)
* [train-shard-0003-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0003-of-0046.json.gz)
* [train-shard-0004-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0004-of-0046.json.gz)
* [train-shard-0005-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0005-of-0046.json.gz)
* [train-shard-0006-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0006-of-0046.json.gz)
* [train-shard-0007-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0007-of-0046.json.gz)
* [train-shard-0008-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0008-of-0046.json.gz)
* [train-shard-0009-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0009-of-0046.json.gz)
* [train-shard-0010-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0010-of-0046.json.gz)
* [train-shard-0011-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0011-of-0046.json.gz)
* [train-shard-0012-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0012-of-0046.json.gz)
* [train-shard-0013-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0013-of-0046.json.gz)
* [train-shard-0014-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0014-of-0046.json.gz)
* [train-shard-0015-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0015-of-0046.json.gz)
* [train-shard-0016-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0016-of-0046.json.gz)
* [train-shard-0017-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0017-of-0046.json.gz)
* [train-shard-0018-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0018-of-0046.json.gz)
* [train-shard-0019-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0019-of-0046.json.gz)
* [train-shard-0020-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0020-of-0046.json.gz)
* [train-shard-0021-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0021-of-0046.json.gz)
* [train-shard-0022-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0022-of-0046.json.gz)
* [train-shard-0023-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0023-of-0046.json.gz)
* [train-shard-0024-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0024-of-0046.json.gz)
* [train-shard-0025-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0025-of-0046.json.gz)
* [train-shard-0026-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0026-of-0046.json.gz)
* [train-shard-0027-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0027-of-0046.json.gz)
* [train-shard-0028-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0028-of-0046.json.gz)
* [train-shard-0029-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0029-of-0046.json.gz)
* [train-shard-0030-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0030-of-0046.json.gz)
* [train-shard-0031-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0031-of-0046.json.gz)
* [train-shard-0032-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0032-of-0046.json.gz)
* [train-shard-0033-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0033-of-0046.json.gz)
* [train-shard-0034-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0034-of-0046.json.gz)
* [train-shard-0035-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0035-of-0046.json.gz)
* [train-shard-0036-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0036-of-0046.json.gz)
* [train-shard-0037-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0037-of-0046.json.gz)
* [train-shard-0038-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0038-of-0046.json.gz)
* [train-shard-0039-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0039-of-0046.json.gz)
* [train-shard-0040-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0040-of-0046.json.gz)
* [train-shard-0041-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0041-of-0046.json.gz)
* [train-shard-0042-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0042-of-0046.json.gz)
* [train-shard-0043-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0043-of-0046.json.gz)
* [train-shard-0044-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0044-of-0046.json.gz)
* [train-shard-0045-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0045-of-0046.json.gz)
* [train-shard-0046-of-0046](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/train-shard-0046-of-0046.json.gz)
* [validation-shard-0001-of-0001](https://huggingface.co/datasets/NbAiLab/NCC/resolve/main/data/validation-shard-0001-of-0001.json.gz)
</details>
### Dataset Summary
The NCC dataset contains JSON lines with language training data. Here is an example JSON line:
```json
{
"id": "1006205",
"doc_type": "cc100",
"publish_year": 2021,
"lang_fasttext": "nn",
"lang_fasttext_conf": "0.641",
"text": "Eg har ein PLAN! KOS deg og ha ei fin helg"
}
```
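Since the shards are gzipped JSON lines, a shard can be streamed with nothing more than the Python standard library. A minimal sketch (the `read_ncc_shard` helper name is ours, not part of the dataset):

```python
import gzip
import json

def read_ncc_shard(path):
    """Yield one record per line from a gzipped NCC JSON-lines shard."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

# Each line parses into a plain dict with the fields documented below:
record = json.loads(
    '{"id": "1006205", "doc_type": "cc100", "publish_year": 2021,'
    ' "lang_fasttext": "nn", "lang_fasttext_conf": "0.641",'
    ' "text": "Eg har ein PLAN! KOS deg og ha ei fin helg"}'
)
```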
## Data Fields
|**id** | String. Unique identifier, also referencing the source of the line.|
|:-----------|:------------|
|**doc_type** | String describing the type of media the text was extracted from (i.e. book, newspaper, etc.)|
|**publish_year** | Integer. The year the text was published. When the year is undetermined it is set to 2021.|
|**lang_fasttext** | String. Language of the text as identified by FastText.|
|**lang_fasttext_conf** | String. Confidence score calculated by FastText.|
|**text** | String. The complete UTF-8 document. If longer than 1M characters it is split.|
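Note that `lang_fasttext_conf` is stored as a string, so it must be cast to float before numeric filtering. A small illustrative filter (the function name and the 0.8 default threshold are our own choices):

```python
def confident_norwegian(record, threshold=0.8):
    """True when FastText labelled the text Bokmaal ("no") or Nynorsk ("nn")
    with at least the given confidence.

    Note: lang_fasttext_conf is a string and must be cast before comparing.
    """
    return (record["lang_fasttext"] in {"no", "nn"}
            and float(record["lang_fasttext_conf"]) >= threshold)

sample = {"lang_fasttext": "nn", "lang_fasttext_conf": "0.641"}
confident_norwegian(sample)                 # False: 0.641 < 0.8
confident_norwegian(sample, threshold=0.5)  # True
```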
### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single ~1GB file, while the train split is sharded into ~1GB chunks.
All files are gzipped.
Build date: 21012022
#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.
### Summary
| Words | Documents | Words/Document |
|--------------:|------------:|-----------------:|
| 6,905,570,165 | 20,830,348 | 331 |
### Document Types
| Source | Words | Documents | Words/Document |
|--------------------------------------:|--------------:|------------:|-----------------:|
| newspaper_ocr | 1,974,452,883 | 9,872,470 | 199 |
| parliament | 1,273,353,169 | 9,321 | 136,611 |
| books | 842,936,050 | 23,708 | 35,554 |
| newspapers_online_nb | 487,189,627 | 3,446,348 | 141 |
| maalfrid_regjeringen | 360,349,242 | 919,902 | 391 |
| maalfrid_ssb | 279,732,847 | 851,982 | 328 |
| maalfrid_uio | 181,916,296 | 771,480 | 235 |
| government_nb | 134,127,104 | 3,476 | 38,586 |
| wikipedia_download_nbo | 110,845,615 | 523,593 | 211 |
| maalfrid_fylkesmannen | 102,849,898 | 463,021 | 222 |
| publicreports | 78,347,879 | 3,298 | 23,756 |
| maalfrid_nve | 66,656,315 | 301,966 | 220 |
| maalfrid_patentstyret | 64,985,154 | 213,991 | 303 |
| maalfrid_ntnu | 57,803,460 | 199,307 | 290 |
| newspapers_online_nn | 42,205,558 | 167,347 | 252 |
| lovdata_cd_odelsting_2005 | 36,370,948 | 1,933 | 18,815 |
| maalfrid_vegvesen | 33,431,887 | 166,203 | 201 |
| maalfrid_fhi | 32,784,098 | 144,363 | 227 |
| maalfrid_norad | 32,720,034 | 93,097 | 351 |
| maalfrid_skatteetaten | 32,567,691 | 82,589 | 394 |
| maalfrid_uib | 28,425,322 | 115,729 | 245 |
| wikipedia_download_nno | 27,061,858 | 143,265 | 188 |
| maalfrid_forskningsradet | 24,076,984 | 73,368 | 328 |
| maalfrid_nasjonalparkstyre | 21,309,995 | 93,871 | 227 |
| government_nn | 18,316,345 | 1,063 | 17,230 |
| maalfrid_nmbu | 18,082,476 | 69,719 | 259 |
| maalfrid_oslomet | 17,710,771 | 47,022 | 376 |
| maalfrid_domstol | 16,678,270 | 51,038 | 326 |
| maalfrid_banenor | 16,445,420 | 70,360 | 233 |
| maalfrid_nav | 16,272,635 | 74,101 | 219 |
| maalfrid_landbruksdirektoratet | 13,119,567 | 47,983 | 273 |
| maalfrid_helsedirektoratet | 13,008,787 | 49,344 | 263 |
| maalfrid_nokut | 10,101,424 | 38,552 | 262 |
| maalfrid_hi | 10,046,751 | 39,065 | 257 |
| maalfrid_norges-bank | 9,924,489 | 37,171 | 266 |
| maalfrid_udir | 9,868,345 | 38,736 | 254 |
| maalfrid_vkm | 9,824,529 | 32,230 | 304 |
| maalfrid_nbim | 9,629,725 | 18,131 | 531 |
| maalfrid_miljodirektoratet | 9,496,631 | 34,711 | 273 |
| maalfrid_distriktssenteret | 9,375,506 | 38,525 | 243 |
| maalfrid_ngu | 9,231,905 | 34,619 | 266 |
| maalfrid_ptil | 9,214,434 | 34,250 | 269 |
| maalfrid_nord | 8,992,352 | 44,800 | 200 |
| maalfrid_fiskeridir | 8,297,897 | 33,446 | 248 |
| maalfrid_hivolda | 7,820,709 | 26,473 | 295 |
| maalfrid_difi | 7,789,290 | 35,733 | 217 |
| maalfrid_mattilsynet | 7,492,831 | 27,002 | 277 |
| maalfrid_havarikommisjonen | 7,440,410 | 24,989 | 297 |
| maalfrid_kulturradet | 7,196,423 | 22,437 | 320 |
| maalfrid_ks | 6,915,503 | 27,439 | 252 |
| maalfrid_kystverket | 6,713,070 | 30,975 | 216 |
| maalfrid_udi | 6,433,540 | 19,134 | 336 |
| maalfrid_uia | 5,964,644 | 23,861 | 249 |
| maalfrid_hjelpemiddeldatabasen | 5,892,662 | 34,192 | 172 |
| maalfrid_khrono | 5,859,186 | 19,970 | 293 |
| maalfrid_helsetilsynet | 5,803,000 | 18,365 | 315 |
| maalfrid_moreforsk | 5,622,025 | 21,579 | 260 |
| maalfrid_jernbanedirektoratet | 5,461,268 | 21,666 | 252 |
| maalfrid_veiviseren | 5,316,521 | 18,026 | 294 |
| lovdata_cd_somb_rundskriv_2005 | 5,264,746 | 3,215 | 1,637 |
| maalfrid_dsb | 5,199,259 | 17,814 | 291 |
| lovdata_cd_sentrale_forskrifter_2005 | 5,037,694 | 11,467 | 439 |
| maalfrid_husbanken | 4,711,069 | 15,053 | 312 |
| maalfrid_legemiddelverket | 4,689,988 | 20,192 | 232 |
| maalfrid_vetinst | 4,674,951 | 14,492 | 322 |
| maalfrid_imdi | 4,636,355 | 15,290 | 303 |
| maalfrid_forsvarsbygg | 4,567,318 | 18,886 | 241 |
| maalfrid_sdir | 4,540,110 | 15,202 | 298 |
| maalfrid_konkurransetilsynet | 4,512,807 | 12,617 | 357 |
| maalfrid_dsa | 4,498,837 | 15,898 | 282 |
| maalfrid_arkivverket | 4,493,280 | 16,515 | 272 |
| maalfrid_hiof | 4,473,731 | 23,119 | 193 |
| maalfrid_ehelse | 4,379,984 | 22,553 | 194 |
| maalfrid_inn | 4,326,704 | 26,277 | 164 |
| maalfrid_klagenemndssekretariatet | 4,181,685 | 11,916 | 350 |
| maalfrid_sprakradet | 4,097,815 | 15,187 | 269 |
| maalfrid_dibk | 3,967,428 | 15,509 | 255 |
| maalfrid_nhh | 3,962,033 | 15,678 | 252 |
| maalfrid_kartverket | 3,732,184 | 18,710 | 199 |
| maalfrid_riksrevisjonen | 3,680,555 | 10,922 | 336 |
| maalfrid_toll | 3,510,061 | 13,777 | 254 |
| maalfrid_nibio | 3,456,026 | 17,104 | 202 |
| maalfrid_met | 3,446,762 | 18,282 | 188 |
| maalfrid_bufdir | 3,354,740 | 11,470 | 292 |
| maalfrid_artsdatabanken | 3,193,511 | 9,009 | 354 |
| maalfrid_politiet | 3,167,395 | 10,501 | 301 |
| maalfrid_nkom | 3,127,687 | 10,002 | 312 |
| maalfrid_vestlandfylke | 3,060,166 | 12,075 | 253 |
| maalfrid_uis | 2,924,821 | 9,838 | 297 |
| maalfrid_sykkelbynettverket | 2,820,702 | 11,818 | 238 |
| maalfrid_nlr | 2,646,014 | 15,851 | 166 |
| maalfrid_seniorporten | 2,616,054 | 8,111 | 322 |
| maalfrid_npd | 2,597,831 | 10,742 | 241 |
| maalfrid_aldringoghelse | 2,430,767 | 6,788 | 358 |
| maalfrid_custompublish | 2,430,747 | 9,184 | 264 |
| maalfrid_bioteknologiradet | 2,393,891 | 5,996 | 399 |
| maalfrid_arbeidstilsynet | 2,379,597 | 6,882 | 345 |
| maalfrid_nyemetoder | 2,376,468 | 10,771 | 220 |
| maalfrid_riksantikvaren | 2,257,491 | 8,756 | 257 |
| maalfrid_sjt | 2,238,168 | 11,189 | 200 |
| lovdata_cd_lokaleforskrifter_2005 | 2,176,221 | 22,274 | 97 |
| maalfrid_hvl | 2,149,292 | 9,395 | 228 |
| maalfrid_luftfartstilsynet | 2,101,272 | 9,866 | 212 |
| maalfrid_dfo | 2,073,203 | 9,165 | 226 |
| maalfrid_ldo | 2,047,969 | 7,299 | 280 |
| maalfrid_kompetansenorge | 1,952,035 | 10,245 | 190 |
| maalfrid_forbrukerradet | 1,945,089 | 7,330 | 265 |
| maalfrid_himolde | 1,913,699 | 9,975 | 191 |
| maalfrid_usn | 1,793,297 | 7,403 | 242 |
| lovdata_cd_norgeslover_2005 | 1,760,884 | 1,386 | 1,270 |
| maalfrid_naku | 1,754,510 | 5,239 | 334 |
| maalfrid_medietilsynet | 1,608,424 | 6,611 | 243 |
| maalfrid_matematikksenteret | 1,567,505 | 7,298 | 214 |
| maalfrid_forskningsetikk | 1,545,336 | 5,545 | 278 |
| maalfrid_diku | 1,542,929 | 6,241 | 247 |
| maalfrid_godeidrettsanlegg | 1,506,577 | 6,115 | 246 |
| maalfrid_dirmin | 1,467,255 | 5,303 | 276 |
| maalfrid_diskrimineringsnemnda | 1,463,291 | 4,168 | 351 |
| maalfrid_naturfag | 1,450,662 | 5,976 | 242 |
| maalfrid_arbeidsretten | 1,440,074 | 4,754 | 302 |
| lovdata_cd_rtv_rundskriv_2005 | 1,366,872 | 9,596 | 142 |
| maalfrid_fellesstudentsystem | 1,359,292 | 10,321 | 131 |
| maalfrid_nupi | 1,286,395 | 5,491 | 234 |
| maalfrid_kriminalitetsforebygging | 1,201,477 | 4,667 | 257 |
| maalfrid_anskaffelser | 1,187,544 | 5,479 | 216 |
| maalfrid_folketrygdfondet | 1,183,502 | 4,253 | 278 |
| maalfrid_miljopakken | 1,170,252 | 5,513 | 212 |
| maalfrid_nih | 1,116,791 | 5,271 | 211 |
| maalfrid_statsbygg | 1,103,635 | 4,439 | 248 |
| lovdata_cd_skatt_rundskriv_2005 | 1,102,142 | 398 | 2,769 |
| maalfrid_nb | 1,055,200 | 4,135 | 255 |
| maalfrid_npolar | 1,051,181 | 2,653 | 396 |
| maalfrid_unit | 1,049,621 | 6,329 | 165 |
| maalfrid_valgdirektoratet | 1,009,941 | 9,131 | 110 |
| maalfrid_barneombudet | 980,751 | 2,807 | 349 |
| maalfrid_datatilsynet | 974,679 | 2,965 | 328 |
| maalfrid_lottstift | 959,590 | 3,578 | 268 |
| maalfrid_aho | 953,568 | 4,528 | 210 |
| maalfrid_sykehuspartner | 939,625 | 4,579 | 205 |
| maalfrid_naturfagsenteret | 897,049 | 3,859 | 232 |
| maalfrid_khio | 849,973 | 3,377 | 251 |
| maalfrid_spesialenheten | 824,209 | 2,127 | 387 |
| maalfrid_xn--miljlftet-o8ab | 803,011 | 3,384 | 237 |
| maalfrid_samordnaopptak | 792,595 | 2,368 | 334 |
| maalfrid_helsenorge | 780,465 | 3,034 | 257 |
| maalfrid_skrivesenteret | 777,204 | 4,161 | 186 |
| maalfrid_mareano | 760,645 | 3,724 | 204 |
| maalfrid_fiskeridirektoratet | 755,997 | 2,444 | 309 |
| maalfrid_sykehusinnkjop | 738,720 | 4,340 | 170 |
| maalfrid_matportalen | 630,990 | 2,370 | 266 |
| maalfrid_spk | 613,180 | 2,152 | 284 |
| maalfrid_justervesenet | 595,014 | 1,904 | 312 |
| maalfrid_pasientsikkerhetsprogrammet | 594,399 | 4,684 | 126 |
| maalfrid_nhn | 579,713 | 3,581 | 161 |
| maalfrid_sshf | 572,570 | 1,897 | 301 |
| maalfrid_bibliotekutvikling | 560,126 | 3,216 | 174 |
| maalfrid_nysgjerrigper | 559,207 | 3,019 | 185 |
| maalfrid_nodnett | 538,021 | 2,689 | 200 |
| maalfrid_une | 513,586 | 1,255 | 409 |
| maalfrid_giek | 512,569 | 1,796 | 285 |
| maalfrid_samas | 501,177 | 2,548 | 196 |
| maalfrid_kriminalomsorgen | 496,062 | 1,951 | 254 |
| maalfrid_kjonnsforskning | 483,376 | 1,426 | 338 |
| maalfrid_kunstkultursenteret | 470,009 | 1,435 | 327 |
| lovdata_cd_rundskriv_lovavdeling_2005 | 469,295 | 405 | 1,158 |
| maalfrid_nynorsksenteret | 460,165 | 2,085 | 220 |
| maalfrid_ceres | 448,920 | 1,950 | 230 |
| maalfrid_stami | 445,031 | 1,160 | 383 |
| maalfrid_nsm | 442,110 | 1,536 | 287 |
| maalfrid_gjenopptakelse | 420,205 | 1,467 | 286 |
| maalfrid_nfi | 420,128 | 1,523 | 275 |
| maalfrid_nidsenter | 410,785 | 1,631 | 251 |
| maalfrid_nasjonalmuseet | 390,036 | 1,087 | 358 |
| maalfrid_forbrukertilsynet | 387,579 | 1,227 | 315 |
| maalfrid_natursekken | 378,442 | 3,563 | 106 |
| maalfrid_fordelingsutvalget | 355,121 | 1,385 | 256 |
| maalfrid_digdir | 349,548 | 2,105 | 166 |
| maalfrid_forsvaret | 331,183 | 1,215 | 272 |
| maalfrid_beccle | 329,568 | 1,517 | 217 |
| maalfrid_romsenter | 329,304 | 1,133 | 290 |
| maalfrid_geonorge | 301,869 | 1,622 | 186 |
| maalfrid_universell | 263,740 | 2,155 | 122 |
| maalfrid_ovf | 262,542 | 930 | 282 |
| maalfrid_forbrukereuropa | 259,420 | 1,018 | 254 |
| maalfrid_politihogskolen | 258,615 | 1,229 | 210 |
| maalfrid_vinmonopolet | 245,685 | 671 | 366 |
| maalfrid_energimerking | 237,243 | 1,033 | 229 |
| maalfrid_ombudsmann | 225,947 | 418 | 540 |
| maalfrid_vea-fs | 224,712 | 1,261 | 178 |
| maalfrid_traumebevisst | 224,297 | 2,417 | 92 |
| maalfrid_npe | 205,102 | 1,000 | 205 |
| maalfrid_pkh | 201,503 | 791 | 254 |
| maalfrid_helfo | 193,880 | 988 | 196 |
| maalfrid_opplaringslovutvalget | 193,590 | 549 | 352 |
| maalfrid_regionaleforskningsfond | 187,261 | 989 | 189 |
| maalfrid_nafkam | 177,295 | 571 | 310 |
| maalfrid_jernbanemagasinet | 174,152 | 412 | 422 |
| maalfrid_polarhistorie | 171,386 | 382 | 448 |
| maalfrid_aasentunet | 161,626 | 529 | 305 |
| maalfrid_riksteatret | 159,991 | 798 | 200 |
| maalfrid_realfagsloyper | 157,166 | 748 | 210 |
| maalfrid_koro | 153,304 | 574 | 267 |
| maalfrid_squarespace | 146,931 | 504 | 291 |
| maalfrid_politietssikkerhetstjeneste | 143,781 | 469 | 306 |
| maalfrid_unknown | 139,263 | 700 | 198 |
| maalfrid_whocc | 121,616 | 656 | 185 |
| maalfrid_konfliktraadet | 120,258 | 372 | 323 |
| maalfrid_okokrim | 115,842 | 372 | 311 |
| maalfrid_brreg | 112,787 | 571 | 197 |
| maalfrid_riksmekleren | 110,737 | 558 | 198 |
| maalfrid_sismo | 110,700 | 309 | 358 |
| maalfrid_radetfordyreetikk | 99,241 | 441 | 225 |
| maalfrid_akkreditert | 99,040 | 503 | 196 |
| maalfrid_sivilforsvaret | 97,679 | 514 | 190 |
| maalfrid_lanekassen | 95,286 | 301 | 316 |
| maalfrid_digidel | 95,140 | 607 | 156 |
| maalfrid_generaladvokaten | 91,385 | 294 | 310 |
| maalfrid_uit | 90,273 | 602 | 149 |
| maalfrid_nyinorge | 88,466 | 199 | 444 |
| maalfrid_lokforerskolen | 87,224 | 468 | 186 |
| maalfrid_varsom | 85,382 | 563 | 151 |
| maalfrid_ffi | 80,137 | 220 | 364 |
| maalfrid_kulturminnefondet | 79,767 | 411 | 194 |
| maalfrid_unesco | 76,951 | 382 | 201 |
| maalfrid_yrkesfisker | 74,807 | 501 | 149 |
| maalfrid_dekom | 72,148 | 1,307 | 55 |
| maalfrid_omsorgsforskning | 71,675 | 321 | 223 |
| maalfrid_lektor2 | 67,385 | 549 | 122 |
| maalfrid_openaccess | 63,554 | 192 | 331 |
| maalfrid_ssn | 63,036 | 302 | 208 |
| maalfrid_lokalhistorie | 59,854 | 241 | 248 |
| maalfrid_nlb | 57,872 | 200 | 289 |
| maalfrid_riksadvokaten | 57,563 | 155 | 371 |
| maalfrid_laudim | 57,500 | 393 | 146 |
| maalfrid_denkulturelleskolesekken | 46,018 | 243 | 189 |
| maalfrid_sivilrett | 44,062 | 142 | 310 |
| maalfrid_htu | 43,330 | 169 | 256 |
| maalfrid_yr | 40,646 | 562 | 72 |
| maalfrid_informasjonskompetanse | 40,351 | 330 | 122 |
| maalfrid_dep | 38,882 | 126 | 308 |
| maalfrid_finansportalen | 38,506 | 180 | 213 |
| maalfrid_feide | 36,715 | 267 | 137 |
| maalfrid_kulturped | 36,013 | 96 | 375 |
| maalfrid_fug | 34,158 | 120 | 284 |
| maalfrid_kulturoghelse | 33,424 | 184 | 181 |
| maalfrid_helseklage | 32,756 | 124 | 264 |
| maalfrid_nbsk | 30,674 | 211 | 145 |
| maalfrid_matogindustri | 29,922 | 194 | 154 |
| maalfrid_sinn | 27,541 | 150 | 183 |
| maalfrid_transport21 | 25,317 | 90 | 281 |
| maalfrid_konkursradet | 23,505 | 76 | 309 |
| maalfrid_vergemal | 23,271 | 77 | 302 |
| maalfrid_norec | 22,496 | 78 | 288 |
| maalfrid_pts | 20,459 | 78 | 262 |
| maalfrid_nasjonaleturistveger | 19,922 | 110 | 181 |
| maalfrid_iearth | 19,281 | 146 | 132 |
| maalfrid_hjelpelinjen | 19,209 | 85 | 225 |
| maalfrid_russamtalen | 17,999 | 65 | 276 |
| maalfrid_xn--kvinneligomskjring-1ub | 17,701 | 77 | 229 |
| maalfrid_nynorskbok | 17,600 | 96 | 183 |
| maalfrid_regjeringsadvokaten | 17,416 | 55 | 316 |
| maalfrid_memu | 17,311 | 98 | 176 |
| maalfrid_xn--tilbakefring-2jb | 15,814 | 49 | 322 |
| maalfrid_xn--forskerfr-t8a | 15,724 | 172 | 91 |
| maalfrid_ringerikefengsel | 15,669 | 28 | 559 |
| maalfrid_skeivtarkiv | 15,537 | 69 | 225 |
| maalfrid_samfunnskunnskap | 15,110 | 60 | 251 |
| maalfrid_fordelingsutvalet | 15,017 | 34 | 441 |
| maalfrid_skattefunn | 14,599 | 51 | 286 |
| maalfrid_shiprep | 14,165 | 142 | 99 |
| maalfrid_haldenfengsel | 13,625 | 37 | 368 |
| maalfrid_sevuppt | 13,332 | 52 | 256 |
| maalfrid_forbrukerklageutvalget | 12,698 | 49 | 259 |
| maalfrid_mhfa | 11,999 | 144 | 83 |
| maalfrid_ah | 11,787 | 36 | 327 |
| maalfrid_nettvett | 11,002 | 43 | 255 |
| maalfrid_uh-it | 10,828 | 273 | 39 |
| maalfrid_fishgen | 10,199 | 28 | 364 |
| maalfrid_designavgang | 10,164 | 75 | 135 |
| maalfrid_global | 9,051 | 41 | 220 |
| maalfrid_havmiljo | 8,607 | 68 | 126 |
| maalfrid_valg | 8,516 | 47 | 181 |
| maalfrid_miljoklagenemnda | 7,797 | 35 | 222 |
| maalfrid_altinn | 7,695 | 49 | 157 |
| maalfrid_spinn-inn | 7,674 | 47 | 163 |
| maalfrid_kantinekurset | 7,217 | 53 | 136 |
| maalfrid_bastoyfengsel | 7,142 | 56 | 127 |
| maalfrid_norskpetroleum | 6,083 | 119 | 51 |
| maalfrid_voldsoffererstatning | 5,827 | 26 | 224 |
| maalfrid_musikkbasertmiljobehandling | 5,186 | 39 | 132 |
| maalfrid_prosjektveiviseren | 5,019 | 14 | 358 |
| maalfrid_aldersvennlig | 4,919 | 32 | 153 |
| maalfrid_barentswatch | 4,829 | 32 | 150 |
| maalfrid_fmfiavo@fylkesmannen | 4,702 | 68 | 69 |
| maalfrid_kk-utvalget | 4,697 | 19 | 247 |
| maalfrid_agropub | 4,434 | 17 | 260 |
| maalfrid_utdanningiverden | 4,266 | 13 | 328 |
| maalfrid_overgangsbolig | 3,769 | 35 | 107 |
| maalfrid_forsvaretsmuseer | 3,706 | 34 | 109 |
| maalfrid_okopark | 3,282 | 12 | 273 |
| maalfrid_pst | 2,866 | 14 | 204 |
| maalfrid_sikkerhverdag | 2,697 | 18 | 149 |
| maalfrid_arkitektur | 2,436 | 15 | 162 |
| maalfrid_velgekte | 2,287 | 10 | 228 |
| maalfrid_addlab | 2,109 | 12 | 175 |
| maalfrid_romerikefengsel | 2,088 | 19 | 109 |
| maalfrid_utdanning | 2,009 | 12 | 167 |
| maalfrid_grunderskolen | 1,994 | 7 | 284 |
| maalfrid_umb | 1,934 | 8 | 241 |
| maalfrid_oslofengsel | 1,756 | 8 | 219 |
| maalfrid_hjorteviltregisteret | 1,600 | 5 | 320 |
| maalfrid_alleteller | 1,511 | 7 | 215 |
| maalfrid_webhuset | 1,409 | 5 | 281 |
| maalfrid_lykillinn | 1,349 | 4 | 337 |
| maalfrid_kulturfag | 1,215 | 6 | 202 |
| maalfrid_unimus | 940 | 4 | 235 |
| maalfrid_anleggsregisteret | 928 | 5 | 185 |
| maalfrid_mangfoldsprisen | 597 | 3 | 199 |
| maalfrid_algae2future | 456 | 8 | 57 |
| maalfrid_mammapresenterer | 447 | 2 | 223 |
| maalfrid_karriereveiledning | 391 | 27 | 14 |
| maalfrid_nodsms | 351 | 4 | 87 |
| maalfrid_kildekompasset | 302 | 1 | 302 |
| maalfrid_praksisfou | 297 | 1 | 297 |
| maalfrid_retttilaalese | 246 | 3 | 82 |
| maalfrid_indreostfoldfengsel | 215 | 3 | 71 |
| maalfrid_xn--kroppsvingsforskning-gcc | 205 | 2 | 102 |
| maalfrid_pahoyden | 154 | 1 | 154 |
| maalfrid_norren | 42 | 1 | 42 |
### Languages
| Language | Words | Documents | Words/Document |
|-----------:|--------------:|------------:|-----------------:|
| no | 5,050,752,505 | 17,177,223 | 294 |
| da | 940,216,574 | 574,211 | 1,637 |
| en | 474,855,361 | 1,526,795 | 311 |
| nn | 299,753,996 | 987,701 | 303 |
| fr | 49,409,701 | 108,071 | 457 |
| de | 27,159,878 | 85,230 | 318 |
| sv | 18,773,092 | 118,753 | 158 |
| es | 10,057,791 | 42,177 | 238 |
| fi | 8,104,322 | 46,710 | 173 |
| et | 3,309,661 | 24,183 | 136 |
| cs | 2,652,151 | 21,793 | 121 |
| pt | 2,550,218 | 16,407 | 155 |
| oc | 2,123,730 | 4,927 | 431 |
| nl | 1,984,501 | 11,813 | 167 |
| zh | 1,470,751 | 8,146 | 180 |
| uk | 1,459,484 | 5,096 | 286 |
| ca | 1,370,260 | 4,476 | 306 |
| it | 1,293,230 | 8,479 | 152 |
| la | 1,281,920 | 797 | 1,608 |
| ru | 1,231,482 | 6,796 | 181 |
| pl | 852,304 | 9,396 | 90 |
| eu | 831,276 | 3,195 | 260 |
| hu | 659,973 | 8,499 | 77 |
| fa | 494,551 | 2,047 | 241 |
| ja | 351,634 | 4,994 | 70 |
| is | 309,422 | 1,207 | 256 |
| id | 226,296 | 2,033 | 111 |
| ar | 205,632 | 1,173 | 175 |
| sl | 140,913 | 1,858 | 75 |
| vi | 139,122 | 982 | 141 |
| so | 128,303 | 592 | 216 |
| hr | 124,033 | 1,081 | 114 |
| el | 117,624 | 618 | 190 |
| lv | 106,626 | 123 | 866 |
| tr | 92,680 | 1,630 | 56 |
| ro | 80,804 | 635 | 127 |
| sr | 71,953 | 970 | 74 |
| lt | 70,148 | 869 | 80 |
| gl | 65,152 | 692 | 94 |
| war | 56,369 | 274 | 205 |
| ko | 56,057 | 1,006 | 55 |
| th | 54,067 | 367 | 147 |
| am | 44,818 | 317 | 141 |
| sk | 39,416 | 1,000 | 39 |
| ml | 35,575 | 156 | 228 |
| ceb | 35,337 | 331 | 106 |
| sq | 34,461 | 238 | 144 |
| tl | 30,839 | 177 | 174 |
| kk | 27,827 | 72 | 386 |
| eo | 24,187 | 859 | 28 |
| mn | 21,540 | 22 | 979 |
| sw | 18,670 | 72 | 259 |
| pnb | 18,403 | 80 | 230 |
| sh | 17,807 | 213 | 83 |
| gu | 16,973 | 13 | 1,305 |
| bg | 16,495 | 100 | 164 |
| ur | 15,650 | 169 | 92 |
| mk | 13,305 | 65 | 204 |
| ckb | 9,119 | 43 | 212 |
| ku | 9,071 | 57 | 159 |
| ast | 7,919 | 73 | 108 |
| az | 7,907 | 59 | 134 |
| ms | 7,051 | 483 | 14 |
| uz | 6,924 | 56 | 123 |
| ta | 4,180 | 60 | 69 |
| fy | 3,841 | 68 | 56 |
| ga | 3,761 | 174 | 21 |
| hy | 3,456 | 43 | 80 |
| pa | 3,299 | 17 | 194 |
| hi | 2,783 | 39 | 71 |
| be | 2,556 | 62 | 41 |
| bo | 2,551 | 1 | 2,551 |
| ht | 2,534 | 11 | 230 |
| jv | 2,341 | 91 | 25 |
| min | 2,206 | 18 | 122 |
| cy | 2,052 | 52 | 39 |
| bs | 2,047 | 66 | 31 |
| als | 1,918 | 66 | 29 |
| su | 1,888 | 29 | 65 |
| nds | 1,869 | 162 | 11 |
| ps | 1,832 | 15 | 122 |
| bn | 1,797 | 22 | 81 |
| qu | 1,498 | 14 | 107 |
| ilo | 1,126 | 25 | 45 |
| mt | 968 | 16 | 60 |
| si | 942 | 29 | 32 |
| te | 888 | 18 | 49 |
| my | 784 | 15 | 52 |
| af | 741 | 32 | 23 |
| io | 715 | 15 | 47 |
| tt | 684 | 22 | 31 |
| km | 674 | 11 | 61 |
| br | 645 | 40 | 16 |
| gn | 638 | 11 | 58 |
| jbo | 611 | 27 | 22 |
| as | 584 | 2 | 292 |
| ug | 581 | 6 | 96 |
| kv | 562 | 3 | 187 |
| kn | 544 | 22 | 24 |
| pam | 480 | 2 | 240 |
| kw | 475 | 19 | 25 |
| vep | 419 | 34 | 12 |
| he | 412 | 18 | 22 |
| ka | 351 | 20 | 17 |
| yo | 281 | 9 | 31 |
| wa | 268 | 38 | 7 |
| ky | 228 | 10 | 22 |
| azb | 216 | 1 | 216 |
| ba | 203 | 5 | 40 |
| gom | 174 | 12 | 14 |
| ia | 140 | 15 | 9 |
| mr | 138 | 10 | 13 |
| lmo | 134 | 27 | 4 |
| tg | 129 | 3 | 43 |
| lb | 115 | 26 | 4 |
| pms | 115 | 16 | 7 |
| vec | 67 | 3 | 22 |
| rue | 67 | 2 | 33 |
| sco | 61 | 6 | 10 |
| ie | 59 | 11 | 5 |
| hsb | 57 | 3 | 19 |
| ne | 56 | 6 | 9 |
| bar | 46 | 7 | 6 |
| cbk | 46 | 2 | 23 |
| or | 44 | 2 | 22 |
| mg | 38 | 8 | 4 |
| os | 36 | 3 | 12 |
| tk | 36 | 4 | 9 |
| arz | 31 | 1 | 31 |
| li | 29 | 6 | 4 |
| gd | 29 | 2 | 14 |
| eml | 24 | 5 | 4 |
| diq | 20 | 2 | 10 |
| lrc | 20 | 1 | 20 |
| dsb | 19 | 1 | 19 |
| yue | 19 | 1 | 19 |
| nap | 16 | 1 | 16 |
| nah | 14 | 2 | 7 |
| wuu | 14 | 1 | 14 |
| sd | 14 | 1 | 14 |
| frr | 13 | 3 | 4 |
| rm | 12 | 2 | 6 |
| cv | 12 | 1 | 12 |
| scn | 9 | 2 | 4 |
| bh | 8 | 1 | 8 |
| bcl | 8 | 1 | 8 |
| co | 7 | 1 | 7 |
| ce | 4 | 1 | 4 |
| new | 4 | 1 | 4 |
| vo | 3 | 2 | 1 |
| mzn | 3 | 1 | 3 |
| gv | 3 | 1 | 3 |
| lo | 2 | 1 | 2 |
### Publish Period
| Decade | Words | Documents | Words/Document |
|---------:|--------------:|------------:|-----------------:|
| 2020 | 4,090,213,596 | 10,934,550 | 523 |
| 2010 | 355,391,417 | 2,415,563 | 1,511 |
| 2000 | 447,853,330 | 1,705,354 | 2,773 |
| 1990 | 767,392,364 | 2,513,364 | 3,051 |
| 1980 | 160,980,586 | 538,665 | 3,011 |
| 1970 | 186,113,674 | 829,646 | 2,222 |
| 1960 | 149,421,535 | 834,219 | 1,807 |
| 1950 | 97,863,608 | 478,628 | 2,041 |
| 1940 | 122,648,278 | 570,154 | 2,307 |
| 1930 | 35,635,053 | 697 | 508,420 |
| 1920 | 50,381,418 | 1,049 | 484,836 |
| 1910 | 62,599,984 | 1,221 | 504,678 |
| 1900 | 60,019,080 | 1,130 | 527,329 |
| 1890 | 86,781,861 | 1,777 | 485,878 |
| 1880 | 58,546,570 | 1,064 | 553,442 |
| 1870 | 26,492,662 | 632 | 407,191 |
| 1860 | 39,176,930 | 698 | 543,151 |
| 1850 | 53,801,490 | 846 | 634,038 |
| 1840 | 30,434,939 | 522 | 581,593 |
| 1830 | 18,189,838 | 368 | 481,719 |
| 1820 | 4,721,154 | 144 | 338,350 |
| 1810 | 910,798 | 57 | 124,880 |
## Considerations for Using the Data
This corpus contains copyrighted material and may not be used outside the National Library of Norway. The dataset should not be distributed.
### Discussion of Biases
Please refer to our paper.
### Dataset Curators
[Freddy Wetjen](mailto:Freddy.wetjen@nb.no) and [Per Egil Kummervold](mailto:Per.Kummervold@nb.no)
## License
Various licences apply to different parts of the corpus. Every document in the corpus has a **"doc_type"** tag identifying its source. If you are unable to accept any of the licenses below, you should filter out the **"doc_type"** values with a conflicting license.
| Doc_type | License |
| :-------- | :------------- |
| government_nb, government_nn, parliament, publicreports, lovdata_cd_\*, maalfrid_\* | [NLOD 2.0](https://data.norge.no/nlod/en/2.0/)|
| newspapers_ocr, newspapers_pdf, books| [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)|
| newspapers_online_nb, newspapers_online_nn | [CC BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/)|
| opensubtitles, wikipedia | [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/)|
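The doc_type-based filtering described above can be sketched with glob matching. The mapping and helper names below are our own, derived from the license table; note that some doc_type values in the corpus carry suffixes (e.g. `wikipedia_download_nbo`), so broader patterns may be needed in practice:

```python
import fnmatch

# Patterns follow the license table above; glob patterns cover the
# lovdata_cd_* and maalfrid_* families. This mapping is an illustrative
# sketch, not an official part of the dataset.
LICENSE_BY_PATTERN = {
    "government_nb": "NLOD 2.0",
    "government_nn": "NLOD 2.0",
    "parliament": "NLOD 2.0",
    "publicreports": "NLOD 2.0",
    "lovdata_cd_*": "NLOD 2.0",
    "maalfrid_*": "NLOD 2.0",
    "newspapers_ocr": "CC0 1.0",
    "newspapers_pdf": "CC0 1.0",
    "books": "CC0 1.0",
    "newspapers_online_nb": "CC BY-NC 2.0",
    "newspapers_online_nn": "CC BY-NC 2.0",
    "opensubtitles": "CC BY-SA 3.0",
    "wikipedia": "CC BY-SA 3.0",
}

def license_for(doc_type):
    """Return the license string for a doc_type, or None if unmapped."""
    for pattern, lic in LICENSE_BY_PATTERN.items():
        if fnmatch.fnmatch(doc_type, pattern):
            return lic
    return None

def drop_conflicting(records, rejected):
    """Keep only records whose doc_type license is not in `rejected`."""
    return [r for r in records if license_for(r["doc_type"]) not in rejected]
```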
### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
    author = "Kummervold, Per E  and
      De la Rosa, Javier  and
      Wetjen, Freddy  and
      Brygfjeld, Svein Arne",
    booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
    year = "2021",
    address = "Reykjavik, Iceland (Online)",
    publisher = "Link{\"o}ping University Electronic Press, Sweden",
    url = "https://aclanthology.org/2021.nodalida-main.3",
    pages = "20--29",
    abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```
metaeval/blimp_classification | 2023-01-09T10:50:25.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"cola",
"region:us"
] | metaeval | Acceptable/non acceptable sentences (recasted as a classification task) | null | 1 | 6 | 2022-03-02T23:29:22 | ---
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- acceptability-classification
language:
- en
tags:
- cola
---
Blimp with the coarse categories, recast as a classification task (CoLA format).
mozilla-foundation/common_voice_3_0 | 2023-07-29T15:59:59.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | 0 | 6 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
br:
- 10K<n<100K
ca:
- 10K<n<100K
cnh:
- 1K<n<10K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
en:
- 100K<n<1M
eo:
- 10K<n<100K
es:
- 10K<n<100K
et:
- 1K<n<10K
eu:
- 10K<n<100K
fa:
- 10K<n<100K
fr:
- 100K<n<1M
ga-IE:
- 1K<n<10K
it:
- 10K<n<100K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
mn:
- 1K<n<10K
nl:
- 10K<n<100K
ru:
- 10K<n<100K
rw:
- 1K<n<10K
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 1K<n<10K
tr:
- 1K<n<10K
tt:
- 10K<n<100K
zh-CN:
- 1K<n<10K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 3
language_bcp47:
- br
- ca
- cnh
- cv
- cy
- de
- dv
- en
- eo
- es
- et
- eu
- fa
- fr
- ga-IE
- it
- kab
- ky
- mn
- nl
- ru
- rw
- sah
- sl
- sv-SE
- tr
- tt
- zh-CN
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 2454 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 1979 validated hours in 29 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Basque, Breton, Catalan, Chinese (China), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Irish, Italian, Kabyle, Kinyarwanda, Kyrgyz, Mongolian, Persian, Russian, Sakha, Slovenian, Spanish, Swedish, Tatar, Turkish, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data has been reviewed and received upvotes indicating that it is of high quality.
The invalidated data has been reviewed and received downvotes indicating that it is of low quality.
The reported data has been reported by listeners for various reasons.
The other data has not yet been reviewed.
The dev, test and train splits have all been reviewed, deemed of high quality and divided into dev, test and train portions.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset

ds = load_dataset("mozilla-foundation/common_voice_3_0", "en", use_auth_token=True)

def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]

    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]

    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."

    batch["sentence"] = transcription
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| 9,609 | [embedding vector omitted] |
peixian/rtGender | 2022-10-25T09:54:24.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | peixian | RtGender is a corpus for studying responses to gender online, including posts and responses from Facebook, TED, Fitocracy, and Reddit where the gender of the source poster/speaker is known. | @inproceedings{voigt-etal-2018-rtgender,
title = "{R}t{G}ender: A Corpus for Studying Differential Responses to Gender",
author = "Voigt, Rob and
Jurgens, David and
Prabhakaran, Vinodkumar and
Jurafsky, Dan and
Tsvetkov, Yulia",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://www.aclweb.org/anthology/L18-1445",
} | 1 | 6 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# Dataset Card for rtGender
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
RtGender is a corpus for studying responses to gender online, including posts and responses from Facebook, TED, Fitocracy, and Reddit where the gender of the source poster/speaker is known.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
- `source`: a `string` feature.
- `op_gender`: a `string` feature.
- `post_text`: a `string` feature.
- `response_text`: a `string` feature.
- `sentiment`: a `string` feature.
- `relevance`: a `string` feature.
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 2,990 | [embedding vector omitted] |
pierreguillou/lener_br_finetuning_language_model | 2022-10-25T09:54:32.000Z | [
"task_ids:language-modeling",
"multilinguality:monolingual",
"language:pt",
"lener_br",
"region:us"
] | pierreguillou | null | null | 2 | 6 | 2022-03-02T23:29:22 | ---
language:
- pt
multilinguality:
- monolingual
task_ids:
- language-modeling
paperswithcode_id: lener-br
pretty_name: LeNER-Br language modeling
datasets:
- lener_br
tags:
- lener_br
---
# Dataset Card for "LeNER-Br language modeling"
## Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the [LeNER-Br](https://huggingface.co/datasets/lener_br) dataset ([official site](https://cic.unb.br/~teodecampos/LeNER-Br/)).
The legal texts were downloaded from this [link](https://cic.unb.br/~teodecampos/LeNER-Br/LeNER-Br.zip) (93.6MB) and processed to create a `DatasetDict` with train and validation dataset (20%).
The LeNER-Br language modeling dataset allows the finetuning of language models such as BERTimbau [base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) and [large](https://huggingface.co/neuralmind/bert-large-portuguese-cased).
## Language
Portuguese from Brazil.
## Blog post
[NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domínio jurídico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Dataset structure
```
DatasetDict({
validation: Dataset({
features: ['text'],
num_rows: 3813
})
train: Dataset({
features: ['text'],
num_rows: 15252
})
})
```
## Use
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("pierreguillou/lener_br_finetuning_language_model")
``` | 1,585 | [embedding vector omitted] |
toloka/CrowdSpeech | 2022-12-06T15:24:36.000Z | [
"task_categories:summarization",
"task_categories:automatic-speech-recognition",
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-... | toloka | CrowdSpeech is a publicly available large-scale dataset of crowdsourced audio transcriptions. It contains annotations for more than 50 hours of English speech transcriptions from more than 1,000 crowd workers. | null | 3 | 6 | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
- automatic-speech-recognition
- text2text-generation
task_ids: []
paperswithcode_id: crowdspeech
pretty_name: CrowdSpeech
language_bcp47:
- en-US
tags:
- conditional-text-generation
- stuctured-to-text
- speech-recognition
---
# Dataset Card for CrowdSpeech
## Dataset Description
- **Repository:** [GitHub](https://github.com/Toloka/CrowdSpeech)
- **Paper:** [Paper](https://openreview.net/forum?id=3_hgF1NAXU7)
- **Point of Contact:** research@toloka.ai
### Dataset Summary
CrowdSpeech is the first publicly available large-scale dataset of crowdsourced audio transcriptions.
The dataset was constructed by annotating [LibriSpeech](https://www.openslr.org/12) on the [Toloka crowdsourcing platform](https://toloka.ai).
CrowdSpeech consists of 22K instances with around 155K annotations obtained from crowd workers.
### Supported Tasks and Leaderboards
Aggregation of crowd transcriptions.
### Languages
English
## Dataset Structure
### Data Instances
A data instance contains a URL to the audio recording, a list of transcriptions, the corresponding performers' identifiers, and the ground-truth transcription.
For each data instance, seven crowdsourced transcriptions are provided.
```
{'task': 'https://tlk.s3.yandex.net/annotation_tasks/librispeech/train-clean/0.mp3',
'transcriptions': "had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bough i don't mean you were not so before but you're at present on a different footing | had laid before her a pair of alternatives now of course you are completely your own mistress and are as free as the bird on the bowl i don't mean you were not so before but you were present on a different footing | had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bow i don't mean you are not so before but you're at present on a different footing | had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bow i don't mean you are not so before but you're at present on a different footing | laid before her a pair of alternativesnow of course you're completely your own mistress and are as free as the bird on the bow i don't mean you're not so before but you're at present on a different footing | had laid before her a peril alternatives now of course your completely your own mistress and as free as a bird as the back bowl i don't mean you were not so before but you are present on a different footing | a lady before her a pair of alternatives now of course you're completely your own mistress and rs free as the bird on the ball i don't need you or not so before but you're at present on a different footing",
'performers': '1154 | 3449 | 3097 | 461 | 3519 | 920 | 3660',
'gt': "had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bough i don't mean you were not so before but you're at present on a different footing"}
```
### Data Fields
* task: a string containing a url of the audio recording
* transcriptions: a list of the crowdsourced transcriptions separated by '|'
* performers: the corresponding performers' identifiers.
* gt: ground truth transcription
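Because `transcriptions` and `performers` are pipe-delimited strings rather than lists, they need to be split back into aligned lists before use. A minimal sketch, using a shortened hypothetical instance in the format shown above:

```python
# Hypothetical, shortened CrowdSpeech instance (real instances carry seven transcriptions)
instance = {
    "task": "https://tlk.s3.yandex.net/annotation_tasks/librispeech/train-clean/0.mp3",
    "transcriptions": "had laid before her a pair of alternatives | had layed before her a pair of alternatives",
    "performers": "1154 | 3449",
    "gt": "had laid before her a pair of alternatives",
}

# Split the pipe-delimited fields into lists aligned by position
transcriptions = [t.strip() for t in instance["transcriptions"].split("|")]
performers = [int(p.strip()) for p in instance["performers"].split("|")]

# One transcription per performer
assert len(transcriptions) == len(performers)
pairs = list(zip(performers, transcriptions))
print(pairs[0])  # (1154, 'had laid before her a pair of alternatives')
```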
### Data Splits
There are five splits in the data: train, test, test.other, dev.clean and dev.other.
The train, test and dev.clean splits correspond to the *clean* part of LibriSpeech, which contains higher-quality audio recordings with speaker accents closer to US English. The dev.other and test.other splits correspond to the *other* part of LibriSpeech, whose recordings are more challenging for recognition. The audio recordings are gender-balanced.
## Dataset Creation
### Source Data
[LibriSpeech](https://www.openslr.org/12) is a corpus of approximately 1000 hours of 16kHz read English speech.
### Annotations
Annotation was done on [Toloka crowdsourcing platform](https://toloka.ai) with overlap of 7 (that is, each task was performed by 7 annotators).
Only annotators who self-reported knowledge of English had access to the annotation task.
Additionally, annotators had to pass an *Entrance Exam*: all incoming eligible workers annotate ten audio recordings,
and we then compute our target metric, Word Error Rate (WER), on these recordings and accept into the main task all workers
who achieve a WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).
The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester.
To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets.
See more details in the [paper](https://arxiv.org/pdf/2107.01091.pdf).
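The entrance-exam threshold above can be reproduced with a plain word-level edit distance. This is a minimal sketch of WER, not the exact evaluation code used by the authors, and it assumes a non-empty reference:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,          # deletion
                d[i][j - 1] + 1,          # insertion
                d[i - 1][j - 1] + cost,   # substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)

# A worker passes the entrance exam if their WER is at most 0.40
assert wer("had laid before her a pair of alternatives",
           "had layed before her a pair of alternatives") <= 0.40
print(wer("had laid before her", "had layed before her"))  # 0.25
```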
### Citation Information
```
@inproceedings{CrowdSpeech,
author = {Pavlichenko, Nikita and Stelmakh, Ivan and Ustalov, Dmitry},
title = {{CrowdSpeech and Vox~DIY: Benchmark Dataset for Crowdsourced Audio Transcription}},
year = {2021},
booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
eprint = {2107.01091},
eprinttype = {arxiv},
eprintclass = {cs.SD},
url = {https://openreview.net/forum?id=3_hgF1NAXU7},
language = {english},
pubstate = {forthcoming},
}
``` | 5,665 | [embedding vector omitted] |
vershasaxena91/squad_multitask | 2021-05-06T09:29:54.000Z | [
"region:us"
] | vershasaxena91 | \Stanford Question Answering Dataset (SQuAD) is a reading comprehension \dataset, consisting of questions posed by crowdworkers on a set of Wikipedia \articles, where the answer to every question is a segment of text, or span, \from the corresponding reading passage, or the question might be unanswerable. | \@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | 0 | 6 | 2022-03-02T23:29:22 | Entry not found | 15 | [embedding vector omitted] |
joangaes/depression | 2022-03-10T13:04:18.000Z | [
"region:us"
] | joangaes | null | null | 0 | 6 | 2022-03-10T09:46:18 | Entry not found | 15 | [embedding vector omitted] |
Khedesh/ParsTwiNER | 2022-03-11T16:25:50.000Z | [
"region:us"
] | Khedesh | null | null | 0 | 6 | 2022-03-11T16:22:48 | Entry not found | 15 | [embedding vector omitted] |
wanyu/IteraTeR_human_sent | 2022-10-24T18:58:22.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | wanyu | null | null | 0 | 6 | 2022-03-13T20:46:23 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_human_sent
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
| 575 | [embedding vector omitted] |
sayalaruano/FakeNewsSpanish_Kaggle1 | 2022-03-22T14:59:40.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | sayalaruano | null | null | 0 | 6 | 2022-03-22T14:53:20 | ---
license: cc-by-nc-sa-4.0
---
This dataset was obtained from: https://www.kaggle.com/datasets/arseniitretiakov/noticias-falsas-en-espaol
| 146 | [embedding vector omitted] |
wesamhaddad14/spanishNLP | 2022-03-24T16:46:39.000Z | [
"region:us"
] | wesamhaddad14 | null | null | 0 | 6 | 2022-03-24T16:36:16 | # Dataset Card for SpanishNLP
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Spanish poems with their authors and titles
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,495 | [embedding vector omitted] |
andreamorgar/spanish_poetry | 2022-03-30T12:39:22.000Z | [
"license:gpl-3.0",
"region:us"
] | andreamorgar | null | null | 2 | 6 | 2022-03-30T12:29:11 | ---
license: gpl-3.0
---
# Spanish Poetry Dataset
There are not many poetry datasets, and for Spanish the situation is even worse! With this dataset, we want to give access to this quality Spanish data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, a wider analysis, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: https://github.com/andreamorgar/poesIA
### Content
Data was acquired in July 2020 from the poetry webpage www.poemas-del-alma.com, which provides a large amount of poems in Spanish. Data was scraped using the Python library BeautifulSoup. For each poem on www.poemas-del-alma.com, we collected the poet's name, the poem, and the poem title. The scraping script is available at https://github.com/andreamorgar/poesIA/blob/master/poetry-scrapper.py.
### Languages
Spanish
### Acknowledgements
We wouldn't be here without www.poemas-del-alma.com, which provides the poetry collection in this dataset. | 1,214 | [embedding vector omitted] |
hackathon-pln-es/scientific_papers_en_es | 2022-04-03T23:59:39.000Z | [
"region:us"
] | hackathon-pln-es | null | null | 1 | 6 | 2022-04-03T23:53:00 | Entry not found | 15 | [embedding vector omitted] |
crystina-z/quora | 2022-04-11T03:39:09.000Z | [
"region:us"
] | crystina-z | null | @misc{bajaj2018ms,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu
and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song
and Alina Stoica and Saurabh Tiwary and Tong Wang},
year={2018},
eprint={1611.09268},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 6 | 2022-04-11T01:31:58 | Entry not found | 15 | [embedding vector omitted] |
mwong/fever-evidence-related | 2022-10-25T10:06:51.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"region:... | mwong | null | null | 1 | 6 | 2022-04-12T08:39:59 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: fever
pretty_name: fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the FEVER dataset (https://fever.ai), pre-processed and ready to train and evaluate.
The training objective is a text classification task - given a claim and evidence, predict if evidence is related to claim. | 587 | [embedding vector omitted] |
mwong/climate-claim-related | 2022-10-25T10:06:59.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_fever",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
... | mwong | null | null | 1 | 6 | 2022-04-15T07:09:18 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: climate-fever
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_fever
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Fever dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html), pre-processed and ready to train and evaluate.
The training objective is a text classification task - given a claim and evidence, predict if claim is related to evidence. | 672 | [embedding vector omitted] |
agemagician/NetSurfP-SS3 | 2022-04-18T03:43:55.000Z | [
"region:us"
] | agemagician | null | null | 1 | 6 | 2022-04-18T03:43:51 | Entry not found | 15 | [embedding vector omitted] |
wanyu/IteraTeR_v2 | 2022-10-24T18:58:08.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2204.03685",
"region:us"
] | wanyu | null | null | 1 | 6 | 2022-04-18T20:09:17 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR_v2
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Read, Revise, Repeat: A System Demonstration for Human-in-the-loop Iterative Text Revision](https://arxiv.org/abs/2204.03685)
Authors: Wanyu Du*, Zae Myung Kim*, Vipul Raheja, Dhruv Kumar, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
Watch our system demonstration below!
[](https://www.youtube.com/watch?v=lK08tIpEoaE)
| 736 | [embedding vector omitted] |
kniemiec/crack-segmentation | 2022-04-19T19:16:05.000Z | [
"region:us"
] | kniemiec | null | null | 0 | 6 | 2022-04-19T19:05:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mwong/climatetext-evidence-related-evaluation | 2022-10-25T10:08:46.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | 1 | 6 | 2022-04-20T12:18:14 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a climate-related claim and evidence, predict whether the evidence is related to the claim. | 628 | [
[
-0.01050567626953125,
-0.035247802734375,
0.0245513916015625,
0.00884246826171875,
-0.0179901123046875,
-0.00870513916015625,
-0.01308441162109375,
-0.024749755859375,
0.0027332305908203125,
0.066162109375,
-0.039306640625,
-0.04376220703125,
-0.0540771484375,
... |
h4iku/coconut_c2005_preprocessed | 2022-04-21T11:39:26.000Z | [
"region:us"
] | h4iku | null | null | 0 | 6 | 2022-04-21T08:37:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mwong/climatetext-claim-evidence-pair-related-evaluation | 2022-10-25T10:08:55.000Z | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | mwong | null | null | 1 | 6 | 2022-04-21T10:26:24 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the pair is related. | 615 | [
[
-0.01049041748046875,
-0.035797119140625,
0.025299072265625,
0.01055145263671875,
-0.019683837890625,
-0.009124755859375,
-0.01172637939453125,
-0.02740478515625,
0.00484466552734375,
0.06658935546875,
-0.040069580078125,
-0.04443359375,
-0.052978515625,
0.0... |
janck/bigscience-lama | 2022-10-21T08:16:23.000Z | [
"task_categories:text-retrieval",
"task_categories:text-classification",
"task_ids:fact-checking-retrieval",
"task_ids:text-scoring",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"probing",
"re... | janck | null | null | 0 | 6 | 2022-04-27T09:20:12 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
trex:
- 1M<n<10M
task_categories:
- text-retrieval
- text-classification
task_ids:
- fact-checking-retrieval
- text-scoring
paperswithcode_id: lama
pretty_name: 'LAMA: LAnguage Model Analysis - BigScience version'
tags:
- probing
---
# Dataset Card for LAMA: LAnguage Model Analysis - a dataset for probing and analyzing the factual and commonsense knowledge contained in pretrained language models.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
https://github.com/facebookresearch/LAMA
- **Repository:**
https://github.com/facebookresearch/LAMA
- **Paper:**
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
### Dataset Summary
This dataset provides the data for LAMA. It contains only the TRex subset
(Wikidata triples).
The dataset includes some cleanup, plus the addition of a masked sentence
and associated answers for the [MASK] token. Accuracy in
predicting the [MASK] token shows how well the language model knows
facts and commonsense information. The [MASK] tokens are only in the
"object" slots.
This version also contains questions, in addition to templates, so it can be used to probe non-masking models as well.
See the paper for more details. For more information, also see:
https://github.com/facebookresearch/LAMA
### Languages
en
## Dataset Structure
### Data Instances
The trex config has the following fields:
```
{'uuid': 'a37257ae-4cbb-4309-a78a-623036c96797', 'sub_label': 'Pianos Become the Teeth', 'predicate_id': 'P740', 'obj_label': 'Baltimore', 'template': '[X] was founded in [Y] .', 'type': 'N-1', 'question': 'Where was [X] founded?'}
34039
```
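As a minimal sketch (not part of the original card), an instance's `template` field can be turned into a masked probe by substituting the subject for `[X]` and masking the object slot `[Y]`; the `question` field can be filled the same way for non-masking models. Field names follow the example instance above.

```python
# Sketch: build probe inputs from a trex instance (field names as in the
# example instance shown above).
instance = {
    "uuid": "a37257ae-4cbb-4309-a78a-623036c96797",
    "sub_label": "Pianos Become the Teeth",
    "predicate_id": "P740",
    "obj_label": "Baltimore",
    "template": "[X] was founded in [Y] .",
    "type": "N-1",
    "question": "Where was [X] founded?",
}

def masked_probe(inst):
    """Substitute the subject for [X] and mask the object slot [Y]."""
    return inst["template"].replace("[X]", inst["sub_label"]).replace("[Y]", "[MASK]")

def question_probe(inst):
    """Fill the subject into the question field, for non-masking models."""
    return inst["question"].replace("[X]", inst["sub_label"])

print(masked_probe(instance))    # Pianos Become the Teeth was founded in [MASK] .
print(question_probe(instance))  # Where was Pianos Become the Teeth founded?
```

The model's prediction for the `[MASK]` slot is then compared against `obj_label` ("Baltimore" in this instance).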
### Data Splits
There are no data splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created to probe what language models understand.
### Source Data
#### Initial Data Collection and Normalization
See the research paper and website for more detail. The dataset was
gathered from various other datasets and cleaned up for probing.
#### Who are the source language producers?
The LAMA authors and the original authors of the various configs.
### Annotations
#### Annotation process
Human annotations under the original datasets (conceptnet), and various machine annotations.
#### Who are the annotators?
Human annotations and machine annotations.
### Personal and Sensitive Information
Unknown, but likely includes names of famous people.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to probe the understanding of language models.
### Discussion of Biases
Since the data comes from human annotators, it is likely to contain biases.
[More Information Needed]
### Other Known Limitations
The original documentation for the data fields is limited.
## Additional Information
### Dataset Curators
The authors of LAMA at Facebook and the authors of the original datasets.
### Licensing Information
Creative Commons Attribution-NonCommercial 4.0 International License; see https://github.com/facebookresearch/LAMA/blob/master/LICENSE
### Citation Information
@inproceedings{petroni2019language,
title={Language Models as Knowledge Bases?},
author={F. Petroni, T. Rockt{\"{a}}schel, A. H. Miller, P. Lewis, A. Bakhtin, Y. Wu and S. Riedel},
booktitle={In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019},
year={2019}
}
@inproceedings{petroni2020how,
title={How Context Affects Language Models' Factual Predictions},
author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
booktitle={Automated Knowledge Base Construction},
year={2020},
url={https://openreview.net/forum?id=025X0zPfn}
}
| 5,566 | [
[
-0.024383544921875,
-0.07257080078125,
0.00423431396484375,
0.016571044921875,
-0.009613037109375,
-0.01522064208984375,
-0.0389404296875,
-0.028594970703125,
0.0257110595703125,
0.03515625,
-0.04400634765625,
-0.0655517578125,
-0.031494140625,
0.00277328491... |
mrm8488/ImageNet1K-val | 2022-04-27T19:16:51.000Z | [
"region:us"
] | mrm8488 | null | null | 0 | 6 | 2022-04-27T19:05:28 | Mapping from ImageNet synset IDs to class labels:
```
n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
n01484850 great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias
n01491361 tiger shark, Galeocerdo cuvieri
n01494475 hammerhead, hammerhead shark
n01496331 electric ray, crampfish, numbfish, torpedo
n01498041 stingray
n01514668 cock
n01514859 hen
n01518878 ostrich, Struthio camelus
n01530575 brambling, Fringilla montifringilla
n01531178 goldfinch, Carduelis carduelis
n01532829 house finch, linnet, Carpodacus mexicanus
n01534433 junco, snowbird
n01537544 indigo bunting, indigo finch, indigo bird, Passerina cyanea
n01558993 robin, American robin, Turdus migratorius
n01560419 bulbul
n01580077 jay
n01582220 magpie
n01592084 chickadee
n01601694 water ouzel, dipper
n01608432 kite
n01614925 bald eagle, American eagle, Haliaeetus leucocephalus
n01616318 vulture
n01622779 great grey owl, great gray owl, Strix nebulosa
n01629819 European fire salamander, Salamandra salamandra
n01630670 common newt, Triturus vulgaris
n01631663 eft
n01632458 spotted salamander, Ambystoma maculatum
n01632777 axolotl, mud puppy, Ambystoma mexicanum
n01641577 bullfrog, Rana catesbeiana
n01644373 tree frog, tree-frog
n01644900 tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui
n01664065 loggerhead, loggerhead turtle, Caretta caretta
n01665541 leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea
n01667114 mud turtle
n01667778 terrapin
n01669191 box turtle, box tortoise
n01675722 banded gecko
n01677366 common iguana, iguana, Iguana iguana
n01682714 American chameleon, anole, Anolis carolinensis
n01685808 whiptail, whiptail lizard
n01687978 agama
n01688243 frilled lizard, Chlamydosaurus kingi
n01689811 alligator lizard
n01692333 Gila monster, Heloderma suspectum
n01693334 green lizard, Lacerta viridis
n01694178 African chameleon, Chamaeleo chamaeleon
n01695060 Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis
n01697457 African crocodile, Nile crocodile, Crocodylus niloticus
n01698640 American alligator, Alligator mississipiensis
n01704323 triceratops
n01728572 thunder snake, worm snake, Carphophis amoenus
n01728920 ringneck snake, ring-necked snake, ring snake
n01729322 hognose snake, puff adder, sand viper
n01729977 green snake, grass snake
n01734418 king snake, kingsnake
n01735189 garter snake, grass snake
n01737021 water snake
n01739381 vine snake
n01740131 night snake, Hypsiglena torquata
n01742172 boa constrictor, Constrictor constrictor
n01744401 rock python, rock snake, Python sebae
n01748264 Indian cobra, Naja naja
n01749939 green mamba
n01751748 sea snake
n01753488 horned viper, cerastes, sand viper, horned asp, Cerastes cornutus
n01755581 diamondback, diamondback rattlesnake, Crotalus adamanteus
n01756291 sidewinder, horned rattlesnake, Crotalus cerastes
n01768244 trilobite
n01770081 harvestman, daddy longlegs, Phalangium opilio
n01770393 scorpion
n01773157 black and gold garden spider, Argiope aurantia
n01773549 barn spider, Araneus cavaticus
n01773797 garden spider, Aranea diademata
n01774384 black widow, Latrodectus mactans
n01774750 tarantula
n01775062 wolf spider, hunting spider
n01776313 tick
n01784675 centipede
n01795545 black grouse
n01796340 ptarmigan
n01797886 ruffed grouse, partridge, Bonasa umbellus
n01798484 prairie chicken, prairie grouse, prairie fowl
n01806143 peacock
n01806567 quail
n01807496 partridge
n01817953 African grey, African gray, Psittacus erithacus
n01818515 macaw
n01819313 sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita
n01820546 lorikeet
n01824575 coucal
n01828970 bee eater
n01829413 hornbill
n01833805 hummingbird
n01843065 jacamar
n01843383 toucan
n01847000 drake
n01855032 red-breasted merganser, Mergus serrator
n01855672 goose
n01860187 black swan, Cygnus atratus
n01871265 tusker
n01872401 echidna, spiny anteater, anteater
n01873310 platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus
n01877812 wallaby, brush kangaroo
n01882714 koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus
n01883070 wombat
n01910747 jellyfish
n01914609 sea anemone, anemone
n01917289 brain coral
n01924916 flatworm, platyhelminth
n01930112 nematode, nematode worm, roundworm
n01943899 conch
n01944390 snail
n01945685 slug
n01950731 sea slug, nudibranch
n01955084 chiton, coat-of-mail shell, sea cradle, polyplacophore
n01968897 chambered nautilus, pearly nautilus, nautilus
n01978287 Dungeness crab, Cancer magister
n01978455 rock crab, Cancer irroratus
n01980166 fiddler crab
n01981276 king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica
n01983481 American lobster, Northern lobster, Maine lobster, Homarus americanus
n01984695 spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish
n01985128 crayfish, crawfish, crawdad, crawdaddy
n01986214 hermit crab
n01990800 isopod
n02002556 white stork, Ciconia ciconia
n02002724 black stork, Ciconia nigra
n02006656 spoonbill
n02007558 flamingo
n02009229 little blue heron, Egretta caerulea
n02009912 American egret, great white heron, Egretta albus
n02011460 bittern
n02012849 crane
n02013706 limpkin, Aramus pictus
n02017213 European gallinule, Porphyrio porphyrio
n02018207 American coot, marsh hen, mud hen, water hen, Fulica americana
n02018795 bustard
n02025239 ruddy turnstone, Arenaria interpres
n02027492 red-backed sandpiper, dunlin, Erolia alpina
n02028035 redshank, Tringa totanus
n02033041 dowitcher
n02037110 oystercatcher, oyster catcher
n02051845 pelican
n02056570 king penguin, Aptenodytes patagonica
n02058221 albatross, mollymawk
n02066245 grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus
n02071294 killer whale, killer, orca, grampus, sea wolf, Orcinus orca
n02074367 dugong, Dugong dugon
n02077923 sea lion
n02085620 Chihuahua
n02085782 Japanese spaniel
n02085936 Maltese dog, Maltese terrier, Maltese
n02086079 Pekinese, Pekingese, Peke
n02086240 Shih-Tzu
n02086646 Blenheim spaniel
n02086910 papillon
n02087046 toy terrier
n02087394 Rhodesian ridgeback
n02088094 Afghan hound, Afghan
n02088238 basset, basset hound
n02088364 beagle
n02088466 bloodhound, sleuthhound
n02088632 bluetick
n02089078 black-and-tan coonhound
n02089867 Walker hound, Walker foxhound
n02089973 English foxhound
n02090379 redbone
n02090622 borzoi, Russian wolfhound
n02090721 Irish wolfhound
n02091032 Italian greyhound
n02091134 whippet
n02091244 Ibizan hound, Ibizan Podenco
n02091467 Norwegian elkhound, elkhound
n02091635 otterhound, otter hound
n02091831 Saluki, gazelle hound
n02092002 Scottish deerhound, deerhound
n02092339 Weimaraner
n02093256 Staffordshire bullterrier, Staffordshire bull terrier
n02093428 American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier
n02093647 Bedlington terrier
n02093754 Border terrier
n02093859 Kerry blue terrier
n02093991 Irish terrier
n02094114 Norfolk terrier
n02094258 Norwich terrier
n02094433 Yorkshire terrier
n02095314 wire-haired fox terrier
n02095570 Lakeland terrier
n02095889 Sealyham terrier, Sealyham
n02096051 Airedale, Airedale terrier
n02096177 cairn, cairn terrier
n02096294 Australian terrier
n02096437 Dandie Dinmont, Dandie Dinmont terrier
n02096585 Boston bull, Boston terrier
n02097047 miniature schnauzer
n02097130 giant schnauzer
n02097209 standard schnauzer
n02097298 Scotch terrier, Scottish terrier, Scottie
n02097474 Tibetan terrier, chrysanthemum dog
n02097658 silky terrier, Sydney silky
n02098105 soft-coated wheaten terrier
n02098286 West Highland white terrier
n02098413 Lhasa, Lhasa apso
n02099267 flat-coated retriever
n02099429 curly-coated retriever
n02099601 golden retriever
n02099712 Labrador retriever
n02099849 Chesapeake Bay retriever
n02100236 German short-haired pointer
n02100583 vizsla, Hungarian pointer
n02100735 English setter
n02100877 Irish setter, red setter
n02101006 Gordon setter
n02101388 Brittany spaniel
n02101556 clumber, clumber spaniel
n02102040 English springer, English springer spaniel
n02102177 Welsh springer spaniel
n02102318 cocker spaniel, English cocker spaniel, cocker
n02102480 Sussex spaniel
n02102973 Irish water spaniel
n02104029 kuvasz
n02104365 schipperke
n02105056 groenendael
n02105162 malinois
n02105251 briard
n02105412 kelpie
n02105505 komondor
n02105641 Old English sheepdog, bobtail
n02105855 Shetland sheepdog, Shetland sheep dog, Shetland
n02106030 collie
n02106166 Border collie
n02106382 Bouvier des Flandres, Bouviers des Flandres
n02106550 Rottweiler
n02106662 German shepherd, German shepherd dog, German police dog, alsatian
n02107142 Doberman, Doberman pinscher
n02107312 miniature pinscher
n02107574 Greater Swiss Mountain dog
n02107683 Bernese mountain dog
n02107908 Appenzeller
n02108000 EntleBucher
n02108089 boxer
n02108422 bull mastiff
n02108551 Tibetan mastiff
n02108915 French bulldog
n02109047 Great Dane
n02109525 Saint Bernard, St Bernard
n02109961 Eskimo dog, husky
n02110063 malamute, malemute, Alaskan malamute
n02110185 Siberian husky
n02110341 dalmatian, coach dog, carriage dog
n02110627 affenpinscher, monkey pinscher, monkey dog
n02110806 basenji
n02110958 pug, pug-dog
n02111129 Leonberg
n02111277 Newfoundland, Newfoundland dog
n02111500 Great Pyrenees
n02111889 Samoyed, Samoyede
n02112018 Pomeranian
n02112137 chow, chow chow
n02112350 keeshond
n02112706 Brabancon griffon
n02113023 Pembroke, Pembroke Welsh corgi
n02113186 Cardigan, Cardigan Welsh corgi
n02113624 toy poodle
n02113712 miniature poodle
n02113799 standard poodle
n02113978 Mexican hairless
n02114367 timber wolf, grey wolf, gray wolf, Canis lupus
n02114548 white wolf, Arctic wolf, Canis lupus tundrarum
n02114712 red wolf, maned wolf, Canis rufus, Canis niger
n02114855 coyote, prairie wolf, brush wolf, Canis latrans
n02115641 dingo, warrigal, warragal, Canis dingo
n02115913 dhole, Cuon alpinus
n02116738 African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus
n02117135 hyena, hyaena
n02119022 red fox, Vulpes vulpes
n02119789 kit fox, Vulpes macrotis
n02120079 Arctic fox, white fox, Alopex lagopus
n02120505 grey fox, gray fox, Urocyon cinereoargenteus
n02123045 tabby, tabby cat
n02123159 tiger cat
n02123394 Persian cat
n02123597 Siamese cat, Siamese
n02124075 Egyptian cat
n02125311 cougar, puma, catamount, mountain lion, painter, panther, Felis concolor
n02127052 lynx, catamount
n02128385 leopard, Panthera pardus
n02128757 snow leopard, ounce, Panthera uncia
n02128925 jaguar, panther, Panthera onca, Felis onca
n02129165 lion, king of beasts, Panthera leo
n02129604 tiger, Panthera tigris
n02130308 cheetah, chetah, Acinonyx jubatus
n02132136 brown bear, bruin, Ursus arctos
n02133161 American black bear, black bear, Ursus americanus, Euarctos americanus
n02134084 ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus
n02134418 sloth bear, Melursus ursinus, Ursus ursinus
n02137549 mongoose
n02138441 meerkat, mierkat
n02165105 tiger beetle
n02165456 ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle
n02167151 ground beetle, carabid beetle
n02168699 long-horned beetle, longicorn, longicorn beetle
n02169497 leaf beetle, chrysomelid
n02172182 dung beetle
n02174001 rhinoceros beetle
n02177972 weevil
n02190166 fly
n02206856 bee
n02219486 ant, emmet, pismire
n02226429 grasshopper, hopper
n02229544 cricket
n02231487 walking stick, walkingstick, stick insect
n02233338 cockroach, roach
n02236044 mantis, mantid
n02256656 cicada, cicala
n02259212 leafhopper
n02264363 lacewing, lacewing fly
n02268443 dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk
n02268853 damselfly
n02276258 admiral
n02277742 ringlet, ringlet butterfly
n02279972 monarch, monarch butterfly, milkweed butterfly, Danaus plexippus
n02280649 cabbage butterfly
n02281406 sulphur butterfly, sulfur butterfly
n02281787 lycaenid, lycaenid butterfly
n02317335 starfish, sea star
n02319095 sea urchin
n02321529 sea cucumber, holothurian
n02325366 wood rabbit, cottontail, cottontail rabbit
n02326432 hare
n02328150 Angora, Angora rabbit
n02342885 hamster
n02346627 porcupine, hedgehog
n02356798 fox squirrel, eastern fox squirrel, Sciurus niger
n02361337 marmot
n02363005 beaver
n02364673 guinea pig, Cavia cobaya
n02389026 sorrel
n02391049 zebra
n02395406 hog, pig, grunter, squealer, Sus scrofa
n02396427 wild boar, boar, Sus scrofa
n02397096 warthog
n02398521 hippopotamus, hippo, river horse, Hippopotamus amphibius
n02403003 ox
n02408429 water buffalo, water ox, Asiatic buffalo, Bubalus bubalis
n02410509 bison
n02412080 ram, tup
n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
n02417914 ibex, Capra ibex
n02422106 hartebeest
n02422699 impala, Aepyceros melampus
n02423022 gazelle
n02437312 Arabian camel, dromedary, Camelus dromedarius
n02437616 llama
n02441942 weasel
n02442845 mink
n02443114 polecat, fitch, foulmart, foumart, Mustela putorius
n02443484 black-footed ferret, ferret, Mustela nigripes
n02444819 otter
n02445715 skunk, polecat, wood pussy
n02447366 badger
n02454379 armadillo
n02457408 three-toed sloth, ai, Bradypus tridactylus
n02480495 orangutan, orang, orangutang, Pongo pygmaeus
n02480855 gorilla, Gorilla gorilla
n02481823 chimpanzee, chimp, Pan troglodytes
n02483362 gibbon, Hylobates lar
n02483708 siamang, Hylobates syndactylus, Symphalangus syndactylus
n02484975 guenon, guenon monkey
n02486261 patas, hussar monkey, Erythrocebus patas
n02486410 baboon
n02487347 macaque
n02488291 langur
n02488702 colobus, colobus monkey
n02489166 proboscis monkey, Nasalis larvatus
n02490219 marmoset
n02492035 capuchin, ringtail, Cebus capucinus
n02492660 howler monkey, howler
n02493509 titi, titi monkey
n02493793 spider monkey, Ateles geoffroyi
n02494079 squirrel monkey, Saimiri sciureus
n02497673 Madagascar cat, ring-tailed lemur, Lemur catta
n02500267 indri, indris, Indri indri, Indri brevicaudatus
n02504013 Indian elephant, Elephas maximus
n02504458 African elephant, Loxodonta africana
n02509815 lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
n02510455 giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
n02514041 barracouta, snoek
n02526121 eel
n02536864 coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch
n02606052 rock beauty, Holocanthus tricolor
n02607072 anemone fish
n02640242 sturgeon
n02641379 gar, garfish, garpike, billfish, Lepisosteus osseus
n02643566 lionfish
n02655020 puffer, pufferfish, blowfish, globefish
n02666196 abacus
n02667093 abaya
n02669723 academic gown, academic robe, judge's robe
n02672831 accordion, piano accordion, squeeze box
n02676566 acoustic guitar
n02687172 aircraft carrier, carrier, flattop, attack aircraft carrier
n02690373 airliner
n02692877 airship, dirigible
n02699494 altar
n02701002 ambulance
n02704792 amphibian, amphibious vehicle
n02708093 analog clock
n02727426 apiary, bee house
n02730930 apron
n02747177 ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin
n02749479 assault rifle, assault gun
n02769748 backpack, back pack, knapsack, packsack, rucksack, haversack
n02776631 bakery, bakeshop, bakehouse
n02777292 balance beam, beam
n02782093 balloon
n02783161 ballpoint, ballpoint pen, ballpen, Biro
n02786058 Band Aid
n02787622 banjo
n02788148 bannister, banister, balustrade, balusters, handrail
n02790996 barbell
n02791124 barber chair
n02791270 barbershop
n02793495 barn
n02794156 barometer
n02795169 barrel, cask
n02797295 barrow, garden cart, lawn cart, wheelbarrow
n02799071 baseball
n02802426 basketball
n02804414 bassinet
n02804610 bassoon
n02807133 bathing cap, swimming cap
n02808304 bath towel
n02808440 bathtub, bathing tub, bath, tub
n02814533 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
n02814860 beacon, lighthouse, beacon light, pharos
n02815834 beaker
n02817516 bearskin, busby, shako
n02823428 beer bottle
n02823750 beer glass
n02825657 bell cote, bell cot
n02834397 bib
n02835271 bicycle-built-for-two, tandem bicycle, tandem
n02837789 bikini, two-piece
n02840245 binder, ring-binder
n02841315 binoculars, field glasses, opera glasses
n02843684 birdhouse
n02859443 boathouse
n02860847 bobsled, bobsleigh, bob
n02865351 bolo tie, bolo, bola tie, bola
n02869837 bonnet, poke bonnet
n02870880 bookcase
n02871525 bookshop, bookstore, bookstall
n02877765 bottlecap
n02879718 bow
n02883205 bow tie, bow-tie, bowtie
n02892201 brass, memorial tablet, plaque
n02892767 brassiere, bra, bandeau
n02894605 breakwater, groin, groyne, mole, bulwark, seawall, jetty
n02895154 breastplate, aegis, egis
n02906734 broom
n02909870 bucket, pail
n02910353 buckle
n02916936 bulletproof vest
n02917067 bullet train, bullet
n02927161 butcher shop, meat market
n02930766 cab, hack, taxi, taxicab
n02939185 caldron, cauldron
n02948072 candle, taper, wax light
n02950826 cannon
n02951358 canoe
n02951585 can opener, tin opener
n02963159 cardigan
n02965783 car mirror
n02966193 carousel, carrousel, merry-go-round, roundabout, whirligig
n02966687 carpenter's kit, tool kit
n02971356 carton
n02974003 car wheel
n02977058 cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM
n02978881 cassette
n02979186 cassette player
n02980441 castle
n02981792 catamaran
n02988304 CD player
n02992211 cello, violoncello
n02992529 cellular telephone, cellular phone, cellphone, cell, mobile phone
n02999410 chain
n03000134 chainlink fence
n03000247 chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour
n03000684 chain saw, chainsaw
n03014705 chest
n03016953 chiffonier, commode
n03017168 chime, bell, gong
n03018349 china cabinet, china closet
n03026506 Christmas stocking
n03028079 church, church building
n03032252 cinema, movie theater, movie theatre, movie house, picture palace
n03041632 cleaver, meat cleaver, chopper
n03042490 cliff dwelling
n03045698 cloak
n03047690 clog, geta, patten, sabot
n03062245 cocktail shaker
n03063599 coffee mug
n03063689 coffeepot
n03065424 coil, spiral, volute, whorl, helix
n03075370 combination lock
n03085013 computer keyboard, keypad
n03089624 confectionery, confectionary, candy store
n03095699 container ship, containership, container vessel
n03100240 convertible
n03109150 corkscrew, bottle screw
n03110669 cornet, horn, trumpet, trump
n03124043 cowboy boot
n03124170 cowboy hat, ten-gallon hat
n03125729 cradle
n03126707 crane
n03127747 crash helmet
n03127925 crate
n03131574 crib, cot
n03133878 Crock Pot
n03134739 croquet ball
n03141823 crutch
n03146219 cuirass
n03160309 dam, dike, dyke
n03179701 desk
n03180011 desktop computer
n03187595 dial telephone, dial phone
n03188531 diaper, nappy, napkin
n03196217 digital clock
n03197337 digital watch
n03201208 dining table, board
n03207743 dishrag, dishcloth
n03207941 dishwasher, dish washer, dishwashing machine
n03208938 disk brake, disc brake
n03216828 dock, dockage, docking facility
n03218198 dogsled, dog sled, dog sleigh
n03220513 dome
n03223299 doormat, welcome mat
n03240683 drilling platform, offshore rig
n03249569 drum, membranophone, tympan
n03250847 drumstick
n03255030 dumbbell
n03259280 Dutch oven
n03271574 electric fan, blower
n03272010 electric guitar
n03272562 electric locomotive
n03290653 entertainment center
n03291819 envelope
n03297495 espresso maker
n03314780 face powder
n03325584 feather boa, boa
n03337140 file, file cabinet, filing cabinet
n03344393 fireboat
n03345487 fire engine, fire truck
n03347037 fire screen, fireguard
n03355925 flagpole, flagstaff
n03372029 flute, transverse flute
n03376595 folding chair
n03379051 football helmet
n03384352 forklift
n03388043 fountain
n03388183 fountain pen
n03388549 four-poster
n03393912 freight car
n03394916 French horn, horn
n03400231 frying pan, frypan, skillet
n03404251 fur coat
n03417042 garbage truck, dustcart
n03424325 gasmask, respirator, gas helmet
n03425413 gas pump, gasoline pump, petrol pump, island dispenser
n03443371 goblet
n03444034 go-kart
n03445777 golf ball
n03445924 golfcart, golf cart
n03447447 gondola
n03447721 gong, tam-tam
n03450230 gown
n03452741 grand piano, grand
n03457902 greenhouse, nursery, glasshouse
n03459775 grille, radiator grille
n03461385 grocery store, grocery, food market, market
n03467068 guillotine
n03476684 hair slide
n03476991 hair spray
n03478589 half track
n03481172 hammer
n03482405 hamper
n03483316 hand blower, blow dryer, blow drier, hair dryer, hair drier
n03485407 hand-held computer, hand-held microcomputer
n03485794 handkerchief, hankie, hanky, hankey
n03492542 hard disc, hard disk, fixed disk
n03494278 harmonica, mouth organ, harp, mouth harp
n03495258 harp
n03496892 harvester, reaper
n03498962 hatchet
n03527444 holster
n03529860 home theater, home theatre
n03530642 honeycomb
n03532672 hook, claw
n03534580 hoopskirt, crinoline
n03535780 horizontal bar, high bar
n03538406 horse cart, horse-cart
n03544143 hourglass
n03584254 iPod
n03584829 iron, smoothing iron
n03590841 jack-o'-lantern
n03594734 jean, blue jean, denim
n03594945 jeep, landrover
n03595614 jersey, T-shirt, tee shirt
n03598930 jigsaw puzzle
n03599486 jinrikisha, ricksha, rickshaw
n03602883 joystick
n03617480 kimono
n03623198 knee pad
n03627232 knot
n03630383 lab coat, laboratory coat
n03633091 ladle
n03637318 lampshade, lamp shade
n03642806 laptop, laptop computer
n03649909 lawn mower, mower
n03657121 lens cap, lens cover
n03658185 letter opener, paper knife, paperknife
n03661043 library
n03662601 lifeboat
n03666591 lighter, light, igniter, ignitor
n03670208 limousine, limo
n03673027 liner, ocean liner
n03676483 lipstick, lip rouge
n03680355 Loafer
n03690938 lotion
n03691459 loudspeaker, speaker, speaker unit, loudspeaker system, speaker system
n03692522 loupe, jeweler's loupe
n03697007 lumbermill, sawmill
n03706229 magnetic compass
n03709823 mailbag, postbag
n03710193 mailbox, letter box
n03710637 maillot
n03710721 maillot, tank suit
n03717622 manhole cover
n03720891 maraca
n03721384 marimba, xylophone
n03724870 mask
n03729826 matchstick
n03733131 maypole
n03733281 maze, labyrinth
n03733805 measuring cup
n03742115 medicine chest, medicine cabinet
n03743016 megalith, megalithic structure
n03759954 microphone, mike
n03761084 microwave, microwave oven
n03763968 military uniform
n03764736 milk can
n03769881 minibus
n03770439 miniskirt, mini
n03770679 minivan
n03773504 missile
n03775071 mitten
n03775546 mixing bowl
n03776460 mobile home, manufactured home
n03777568 Model T
n03777754 modem
n03781244 monastery
n03782006 monitor
n03785016 moped
n03786901 mortar
n03787032 mortarboard
n03788195 mosque
n03788365 mosquito net
n03791053 motor scooter, scooter
n03792782 mountain bike, all-terrain bike, off-roader
n03792972 mountain tent
n03793489 mouse, computer mouse
n03794056 mousetrap
n03796401 moving van
n03803284 muzzle
n03804744 nail
n03814639 neck brace
n03814906 necklace
n03825788 nipple
n03832673 notebook, notebook computer
n03837869 obelisk
n03838899 oboe, hautboy, hautbois
n03840681 ocarina, sweet potato
n03841143 odometer, hodometer, mileometer, milometer
n03843555 oil filter
n03854065 organ, pipe organ
n03857828 oscilloscope, scope, cathode-ray oscilloscope, CRO
n03866082 overskirt
n03868242 oxcart
n03868863 oxygen mask
n03871628 packet
n03873416 paddle, boat paddle
n03874293 paddlewheel, paddle wheel
n03874599 padlock
n03876231 paintbrush
n03877472 pajama, pyjama, pj's, jammies
n03877845 palace
n03884397 panpipe, pandean pipe, syrinx
n03887697 paper towel
n03888257 parachute, chute
n03888605 parallel bars, bars
n03891251 park bench
n03891332 parking meter
n03895866 passenger car, coach, carriage
n03899768 patio, terrace
n03902125 pay-phone, pay-station
n03903868 pedestal, plinth, footstall
n03908618 pencil box, pencil case
n03908714 pencil sharpener
n03916031 perfume, essence
n03920288 Petri dish
n03924679 photocopier
n03929660 pick, plectrum, plectron
n03929855 pickelhaube
n03930313 picket fence, paling
n03930630 pickup, pickup truck
n03933933 pier
n03935335 piggy bank, penny bank
n03937543 pill bottle
n03938244 pillow
n03942813 ping-pong ball
n03944341 pinwheel
n03947888 pirate, pirate ship
n03950228 pitcher, ewer
n03954731 plane, carpenter's plane, woodworking plane
n03956157 planetarium
n03958227 plastic bag
n03961711 plate rack
n03967562 plow, plough
n03970156 plunger, plumber's helper
n03976467 Polaroid camera, Polaroid Land camera
n03976657 pole
n03977966 police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria
n03980874 poncho
n03982430 pool table, billiard table, snooker table
n03983396 pop bottle, soda bottle
n03991062 pot, flowerpot
n03992509 potter's wheel
n03995372 power drill
n03998194 prayer rug, prayer mat
n04004767 printer
n04005630 prison, prison house
n04008634 projectile, missile
n04009552 projector
n04019541 puck, hockey puck
n04023962 punching bag, punch bag, punching ball, punchball
n04026417 purse
n04033901 quill, quill pen
n04033995 quilt, comforter, comfort, puff
n04037443 racer, race car, racing car
n04039381 racket, racquet
n04040759 radiator
n04041544 radio, wireless
n04044716 radio telescope, radio reflector
n04049303 rain barrel
n04065272 recreational vehicle, RV, R.V.
n04067472 reel
n04069434 reflex camera
n04070727 refrigerator, icebox
n04074963 remote control, remote
n04081281 restaurant, eating house, eating place, eatery
n04086273 revolver, six-gun, six-shooter
n04090263 rifle
n04099969 rocking chair, rocker
n04111531 rotisserie
n04116512 rubber eraser, rubber, pencil eraser
n04118538 rugby ball
n04118776 rule, ruler
n04120489 running shoe
n04125021 safe
n04127249 safety pin
n04131690 saltshaker, salt shaker
n04133789 sandal
n04136333 sarong
n04141076 sax, saxophone
n04141327 scabbard
n04141975 scale, weighing machine
n04146614 school bus
n04147183 schooner
n04149813 scoreboard
n04152593 screen, CRT screen
n04153751 screw
n04154565 screwdriver
n04162706 seat belt, seatbelt
n04179913 sewing machine
n04192698 shield, buckler
n04200800 shoe shop, shoe-shop, shoe store
n04201297 shoji
n04204238 shopping basket
n04204347 shopping cart
n04208210 shovel
n04209133 shower cap
n04209239 shower curtain
n04228054 ski
n04229816 ski mask
n04235860 sleeping bag
n04238763 slide rule, slipstick
n04239074 sliding door
n04243546 slot, one-armed bandit
n04251144 snorkel
n04252077 snowmobile
n04252225 snowplow, snowplough
n04254120 soap dispenser
n04254680 soccer ball
n04254777 sock
n04258138 solar dish, solar collector, solar furnace
n04259630 sombrero
n04263257 soup bowl
n04264628 space bar
n04265275 space heater
n04266014 space shuttle
n04270147 spatula
n04273569 speedboat
n04275548 spider web, spider's web
n04277352 spindle
n04285008 sports car, sport car
n04286575 spotlight, spot
n04296562 stage
n04310018 steam locomotive
n04311004 steel arch bridge
n04311174 steel drum
n04317175 stethoscope
n04325704 stole
n04326547 stone wall
n04328186 stopwatch, stop watch
n04330267 stove
n04332243 strainer
n04335435 streetcar, tram, tramcar, trolley, trolley car
n04336792 stretcher
n04344873 studio couch, day bed
n04346328 stupa, tope
n04347754 submarine, pigboat, sub, U-boat
n04350905 suit, suit of clothes
n04355338 sundial
n04355933 sunglass
n04356056 sunglasses, dark glasses, shades
n04357314 sunscreen, sunblock, sun blocker
n04366367 suspension bridge
n04367480 swab, swob, mop
n04370456 sweatshirt
n04371430 swimming trunks, bathing trunks
n04371774 swing
n04372370 switch, electric switch, electrical switch
n04376876 syringe
n04380533 table lamp
n04389033 tank, army tank, armored combat vehicle, armoured combat vehicle
n04392985 tape player
n04398044 teapot
n04399382 teddy, teddy bear
n04404412 television, television system
n04409515 tennis ball
n04417672 thatch, thatched roof
n04418357 theater curtain, theatre curtain
n04423845 thimble
n04428191 thresher, thrasher, threshing machine
n04429376 throne
n04435653 tile roof
n04442312 toaster
n04443257 tobacco shop, tobacconist shop, tobacconist
n04447861 toilet seat
n04456115 torch
n04458633 totem pole
n04461696 tow truck, tow car, wrecker
n04462240 toyshop
n04465501 tractor
n04467665 trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi
n04476259 tray
n04479046 trench coat
n04482393 tricycle, trike, velocipede
n04483307 trimaran
n04485082 tripod
n04486054 triumphal arch
n04487081 trolleybus, trolley coach, trackless trolley
n04487394 trombone
n04493381 tub, vat
n04501370 turnstile
n04505470 typewriter keyboard
n04507155 umbrella
n04509417 unicycle, monocycle
n04515003 upright, upright piano
n04517823 vacuum, vacuum cleaner
n04522168 vase
n04523525 vault
n04525038 velvet
n04525305 vending machine
n04532106 vestment
n04532670 viaduct
n04536866 violin, fiddle
n04540053 volleyball
n04542943 waffle iron
n04548280 wall clock
n04548362 wallet, billfold, notecase, pocketbook
n04550184 wardrobe, closet, press
n04552348 warplane, military plane
n04553703 washbasin, handbasin, washbowl, lavabo, wash-hand basin
n04554684 washer, automatic washer, washing machine
n04557648 water bottle
n04560804 water jug
n04562935 water tower
n04579145 whiskey jug
n04579432 whistle
n04584207 wig
n04589890 window screen
n04590129 window shade
n04591157 Windsor tie
n04591713 wine bottle
n04592741 wing
n04596742 wok
n04597913 wooden spoon
n04599235 wool, woolen, woollen
n04604644 worm fence, snake fence, snake-rail fence, Virginia fence
n04606251 wreck
n04612504 yawl
n04613696 yurt
n06359193 web site, website, internet site, site
n06596364 comic book
n06785654 crossword puzzle, crossword
n06794110 street sign
n06874185 traffic light, traffic signal, stoplight
n07248320 book jacket, dust cover, dust jacket, dust wrapper
n07565083 menu
n07579787 plate
n07583066 guacamole
n07584110 consomme
n07590611 hot pot, hotpot
n07613480 trifle
n07614500 ice cream, icecream
n07615774 ice lolly, lolly, lollipop, popsicle
n07684084 French loaf
n07693725 bagel, beigel
n07695742 pretzel
n07697313 cheeseburger
n07697537 hotdog, hot dog, red hot
n07711569 mashed potato
n07714571 head cabbage
n07714990 broccoli
n07715103 cauliflower
n07716358 zucchini, courgette
n07716906 spaghetti squash
n07717410 acorn squash
n07717556 butternut squash
n07718472 cucumber, cuke
n07718747 artichoke, globe artichoke
n07720875 bell pepper
n07730033 cardoon
n07734744 mushroom
n07742313 Granny Smith
n07745940 strawberry
n07747607 orange
n07749582 lemon
n07753113 fig
n07753275 pineapple, ananas
n07753592 banana
n07754684 jackfruit, jak, jack
n07760859 custard apple
n07768694 pomegranate
n07802026 hay
n07831146 carbonara
n07836838 chocolate sauce, chocolate syrup
n07860988 dough
n07871810 meat loaf, meatloaf
n07873807 pizza, pizza pie
n07875152 potpie
n07880968 burrito
n07892512 red wine
n07920052 espresso
n07930864 cup
n07932039 eggnog
n09193705 alp
n09229709 bubble
n09246464 cliff, drop, drop-off
n09256479 coral reef
n09288635 geyser
n09332890 lakeside, lakeshore
n09399592 promontory, headland, head, foreland
n09421951 sandbar, sand bar
n09428293 seashore, coast, seacoast, sea-coast
n09468604 valley, vale
n09472597 volcano
n09835506 ballplayer, baseball player
n10148035 groom, bridegroom
n10565667 scuba diver
n11879895 rapeseed
n11939491 daisy
n12057211 yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum
n12144580 corn
n12267677 acorn
n12620546 hip, rose hip, rosehip
n12768682 buckeye, horse chestnut, conker
n12985857 coral fungus
n12998815 agaric
n13037406 gyromitra
n13040303 stinkhorn, carrion fungus
n13044778 earthstar
n13052670 hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa
n13054560 bolete
n13133613 ear, spike, capitulum
n15075141 toilet tissue, toilet paper, bathroom tissue
``` | 31,691 | [
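Each line above maps a WordNet synset ID (wnid) to a comma-separated list of synonym labels. A minimal sketch of parsing such a listing into a Python dict (the sample lines are copied from the list above):

```python
# Parse "wnid label, synonym, ..." lines into {wnid: [labels]}.
sample = """\
n03481172 hammer
n04118776 rule, ruler
n07717410 acorn squash
"""

def parse_synsets(text):
    mapping = {}
    for line in text.strip().splitlines():
        # The wnid is everything before the first space; the rest is
        # a comma-separated list of synonym labels.
        wnid, _, labels = line.partition(" ")
        mapping[wnid] = [label.strip() for label in labels.split(",")]
    return mapping

synsets = parse_synsets(sample)
print(synsets["n04118776"])  # ['rule', 'ruler']
```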
[
-0.06854248046875,
-0.01552581787109375,
0.020416259765625,
0.0258941650390625,
-0.00916290283203125,
0.019317626953125,
0.0106048583984375,
-0.030487060546875,
0.052215576171875,
-0.018829345703125,
-0.01531219482421875,
-0.030853271484375,
-0.060211181640625,
... |
ai4bharat/Aksharantar | 2023-08-31T07:05:34.000Z | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:original",
"language:asm",
"language:ben",
"lan... | ai4bharat | null | null | 3 | 6 | 2022-05-06T12:35:15 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- found
- other
language:
- asm
- ben
- brx
- doi
- guj
- hin
- kan
- kas
- kok
- mai
- mal
- mar
- mni
- nep
- ori
- pan
- san
- sid
- tam
- tel
- urd
license: cc
multilinguality:
- multilingual
pretty_name: Aksharantar
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for Aksharantar
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indic-xlit/
- **Repository:** https://github.com/AI4Bharat/IndicXlit/
- **Paper:** [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Aksharantar is the largest publicly available transliteration dataset for 20 Indic languages. The corpus has 26M Indic language-English transliteration pairs.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
| <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> | <!-- --> |
| -------------- | -------------- | -------------- | --------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin) | Maithili (mai) | Marathi (mar) | Punjabi (pan) | Tamil (tam) |
| Bengali (ben) | Kannada (kan) | Malayalam (mal)| Nepali (nep) | Sanskrit (san) | Telugu (tel) |
| Bodo(brx) | Kashmiri (kas) | Manipuri (mni) | Oriya (ori) | Sindhi (snd) | Urdu (urd) |
| Gujarati (guj) | Konkani (kok) | Dogri (doi) |
## Dataset Structure
### Data Instances
```
A random sample from Hindi (hin) Train dataset.
{
'unique_identifier': 'hin1241393',
'native word': 'स्वाभिमानिक',
'english word': 'swabhimanik',
'source': 'IndicCorp',
'score': -0.1028788579
}
```
### Data Fields
- `unique_identifier` (string): 3-letter language code followed by a unique number in each set (Train, Test, Val).
- `native word` (string): A word in Indic language.
- `english word` (string): Transliteration of native word in English (Romanised word).
- `source` (string): Source of the data.
- `score` (num): Character-level log probability of the Indic word given the Roman word, as assigned by the IndicXlit model. Only pairs meeting an average score threshold of 0.35 are included.
For created data, the source depends on the provenance/sampling method of each pair in a language and will be one of:
- Dakshina Dataset
- IndicCorp
- Samanantar
- Wikidata
- Existing sources
- Named Entities Indian (AK-NEI)
- Named Entities Foreign (AK-NEF)
- Data from Uniform Sampling method. (Ak-Uni)
- Data from Most Frequent words sampling method. (Ak-Freq)
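As a small illustration of the fields above, the sketch below treats records as plain dicts and filters on the `score` field. The first record is the sample from Data Instances; the second record and the exact cutoff in `keep` are invented for illustration, not the rule actually used to build the corpus:

```python
# Records mirror the fields described above.
records = [
    {"unique_identifier": "hin1241393", "native word": "स्वाभिमानिक",
     "english word": "swabhimanik", "source": "IndicCorp", "score": -0.1028788579},
    # Invented low-scoring record for illustration only.
    {"unique_identifier": "hin0000001", "native word": "उदाहरण",
     "english word": "udaharan", "source": "Wikidata", "score": -0.9},
]

def keep(record, cutoff=-0.35):
    # Hypothetical cutoff on the character-level log probability.
    return record["score"] >= cutoff

kept = [r["english word"] for r in records if keep(r)]
print(kept)  # ['swabhimanik']
```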
### Data Splits
| Subset | asm-en | ben-en | brx-en | guj-en | hin-en | kan-en | kas-en | kok-en | mai-en | mal-en | mni-en | mar-en | nep-en | ori-en | pan-en | san-en | sid-en | tam-en | tel-en | urd-en |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Training | 179K | 1231K | 36K | 1143K | 1299K | 2907K | 47K | 613K | 283K | 4101K | 10K | 1453K | 2397K | 346K | 515K | 1813K | 60K | 3231K | 2430K | 699K |
| Validation | 4K | 11K | 3K | 12K | 6K | 7K | 4K | 4K | 4K | 8K | 3K | 8K | 3K | 3K | 9K | 3K | 8K | 9K | 8K | 12K |
| Test | 5531 | 5009 | 4136 | 7768 | 5693 | 6396 | 7707 | 5093 | 5512 | 6911 | 4925 | 6573 | 4133 | 4256 | 4316 | 5334 | - | 4682 | 4567 | 4463 |
## Dataset Creation
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
#### Who are the source language producers?
[More Information Needed]
### Annotations
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
#### Annotation process
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
#### Who are the annotators?
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
<!-- <a rel="license" float="left" href="http://creativecommons.org/publicdomain/zero/1.0/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100" />
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100" href="http://creativecommons.org/publicdomain/zero/1.0/"/>
</a>
<br/> -->
This data is released under the following licensing scheme:
- Manually collected data: Released under CC-BY license.
- Mined dataset (from Samanantar and IndicCorp): Released under CC0 license.
- Existing sources: Released under CC0 license.
**CC-BY License**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100"/>
</a>
<br>
<br>
<!--
and the Aksharantar benchmark and all manually transliterated data under the [Creative Commons CC-BY license (“no rights reserved”)](https://creativecommons.org/licenses/by/4.0/). -->
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/aksharantar/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Aksharantar</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
```
@misc{madhani2022aksharantar,
title={Aksharantar: Towards Building Open Transliteration Tools for the Next Billion Users},
author={Yash Madhani and Sushane Parthan and Priyanka Bedekar and Ruchi Khapra and Anoop Kunchukuttan and Pratyush Kumar and Mitesh Shantadevi Khapra},
year={2022},
eprint={},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions | 8,387 | [
[
-0.026519775390625,
-0.0267791748046875,
-0.00818634033203125,
0.01197052001953125,
-0.0291748046875,
0.0094451904296875,
-0.015167236328125,
-0.0301666259765625,
0.0196685791015625,
0.01071929931640625,
-0.0350341796875,
-0.044464111328125,
-0.0406494140625,
... |
NLPC-UOM/Student_feedback_analysis_dataset | 2022-10-25T10:13:19.000Z | [
"region:us"
] | NLPC-UOM | null | null | 1 | 6 | 2022-05-07T03:17:15 | # README
## Annotated Student Feedback
---
annotations_creators: []
language:
- en
license:
- mit
---
This resource contains 3,000 student feedback entries that have been annotated for aspect terms, opinion terms, polarities of the opinion terms towards targeted aspects, document-level opinion polarities, and sentence separations.
### Folder Structure of the resource,
```bash
└───Annotated Student Feedback Data
├───Annotator_1
│ ├───Annotated_part_1
│ ├───Annotated_part_2
│ └───towe-eacl_recreation_data_set
│ ├───defomative comment removed
│ └───less than 100 lengthy comment
├───Annotator_2
│ ├───Annotated_part_3
│ ├───Annotated_part_4
│ └───Annotated_part_5
└───Annotator_3
└───Annotated_part_6
```
Each Annotated_part_# folders contain three files. Those are in XMI, XML, and ZIP formats.
XMI files contain the annotated student feedback data and XML files contain tagsets used for annotation.
Find the code for reading data from XML and XMI files in `code_for_read_annotated_data.py`
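The repository ships its own reader (`code_for_read_annotated_data.py`). As a rough sketch of the same idea, XMI is XML, so annotations can be pulled out with the standard library; the tag and attribute names below are hypothetical stand-ins, not the actual schema defined by the tagset XML files:

```python
import xml.etree.ElementTree as ET

# Hypothetical XMI-like fragment for illustration only; real tag and
# attribute names come from the tagset XML files in this resource.
xmi = """
<annotations>
  <aspect begin="0" end="7" term="lecture"/>
  <opinion begin="12" end="21" polarity="positive"/>
</annotations>
"""

root = ET.fromstring(xmi)
aspects = [a.attrib["term"] for a in root.iter("aspect")]
polarities = [o.attrib["polarity"] for o in root.iter("opinion")]
print(aspects, polarities)  # ['lecture'] ['positive']
```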
| 1,077 | [
[
-0.05157470703125,
-0.0283660888671875,
0.036834716796875,
0.0225677490234375,
-0.015716552734375,
0.0282440185546875,
-0.0028972625732421875,
-0.01358795166015625,
0.0224151611328125,
0.047393798828125,
-0.050506591796875,
-0.0531005859375,
-0.00989532470703125... |
bigscience-data/roots_ar_tashkeela | 2022-12-12T11:02:22.000Z | [
"language:ar",
"license:gpl-2.0",
"region:us"
] | bigscience-data | null | null | 0 | 6 | 2022-05-18T09:07:28 | ---
language: ar
license: gpl-2.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_ar_tashkeela
# Tashkeela
- Dataset uid: `tashkeela`
### Description
The dataset was collected from 97 books in both modern and classical Arabic and contains Arabic diacritics.
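Since the corpus carries full diacritics, a common preprocessing step is separating base letters from the diacritic marks. A minimal sketch using only the standard library (not part of the dataset's own tooling):

```python
import unicodedata

def strip_diacritics(text):
    # Arabic diacritics (tashkeel) are Unicode combining marks; drop them.
    return "".join(ch for ch in text if not unicodedata.combining(ch))

word = "كِتَابٌ"  # "book", fully vocalized (7 codepoints, 3 of them diacritics)
print(strip_diacritics(word))
```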
### Homepage
https://sourceforge.net/projects/tashkeela/
### Licensing
- gpl-2.0: GNU General Public License v2.0 only
### Speaker Locations
### Sizes
- 0.2533 % of total
- 2.3340 % of ar
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
| 898 | [
[
-0.0240936279296875,
-0.0310516357421875,
0.007781982421875,
-0.01244354248046875,
-0.036712646484375,
0.01399993896484375,
0.00696563720703125,
-0.009521484375,
0.0196990966796875,
0.036163330078125,
-0.046173095703125,
-0.08648681640625,
-0.056121826171875,
... |
bigscience-data/roots_fr_book_dash_books | 2022-12-12T10:35:08.000Z | [
"language:fr",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | 1 | 6 | 2022-05-18T09:13:23 | ---
language: fr
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_fr_book_dash_books
# Book Dash Books
- Dataset uid: `book_dash_books`
### Description
Book Dash believes that every child should own one hundred books by the age of five.
To that end, we gather creative professionals who volunteer to create new, African storybooks that anyone can freely translate, print and distribute. In this way, we have vastly reduced the costs involved in putting high-quality books in children’s hands and hearts.
### Homepage
https://bookdash.org/books/
### Licensing
Creative Commons Attribution 4.0
### Speaker Locations
- Africa
- South Africa
### Sizes
- 0.0000 % of total
- 0.0000 % of en
- 0.0000 % of fr
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
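The filter names above come from the BigScience preprocessing pipeline. As a hedged sketch of what a size filter like `filter_small_docs_bytes_1024` plausibly does (the actual implementation lives in the BigScience data-preparation code and may differ):

```python
def filter_small_docs_bytes(docs, min_bytes=1024):
    # Keep only documents whose UTF-8 encoding is at least min_bytes long.
    return [d for d in docs if len(d.encode("utf-8")) >= min_bytes]

docs = ["x" * 2048, "too short"]
print(len(filter_small_docs_bytes(docs)))  # 1
```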
| 1,236 | [
[
-0.0484619140625,
-0.0086517333984375,
0.01393890380859375,
0.0237579345703125,
-0.032806396484375,
0.002819061279296875,
0.01474761962890625,
-0.0309906005859375,
0.025146484375,
0.046539306640625,
-0.077392578125,
-0.03985595703125,
-0.046142578125,
0.0095... |
ibm/vira-intents | 2022-06-01T07:39:11.000Z | [
"region:us"
] | ibm | null | null | 1 | 6 | 2022-05-31T08:49:22 | The COVID-19 Vaccine Intent Expressions dataset contains 7,990 varying expressions for common questions about COVID-19 vaccines.
We collaborated with a team at Johns Hopkins University to curate a list of 181 such common questions.
We then showed annotators a question from the list and asked them to express it in their own words, imagining they are chatting with a knowledgeable friend.
A subset of 324 expressions in this dataset are utterances taken from VIRADialogs, a dataset of conversations of users with a chatbot about COVID-19 vaccines.
The data is split into 3 files: train.csv, dev.csv, and test.csv.
Each file contains the following columns:
1. text - the expression written by an annotator (or taken from VIRADialogs)
2. label - the running class index associated with this expression
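Each split file can be read with the standard library; a minimal sketch (the inline sample rows are invented for illustration and are not actual dataset entries):

```python
import csv
import io

# Invented sample rows mirroring the text,label layout described above.
sample_csv = "text,label\nIs the vaccine safe?,0\nCan kids get the shot?,1\n"

with io.StringIO(sample_csv) as f:  # stand-in for open("train.csv")
    rows = list(csv.DictReader(f))

labels = [int(r["label"]) for r in rows]
print(rows[0]["text"], labels)  # Is the vaccine safe? [0, 1]
```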
If you use this dataset please cite:
Benchmark Data and Evaluation Framework for Intent Discovery Around COVID-19 Vaccine Hesitancy
Shai Gretz, Assaf Toledo, Roni Friedman, Dan Lahav, Rose Weeks, Naor Bar-Zeev, João Sedoc, Pooja Sangha, Yoav Katz, Noam Slonim.
arXiv. 2022.
============================
License: Community Data License Agreement - Sharing - Version 1.0
https://cdla.dev/sharing-1-0/
This dataset contains parts of VIRADialogs as-is. All credit for VIRADialogs belongs to Johns Hopkins University, they are the sole owners of VIRADialogs. VIRADialogs is available at vaxchat.org/research. | 1,402 | [
[
-0.0102691650390625,
-0.06378173828125,
0.0213165283203125,
-0.0019006729125976562,
-0.011138916015625,
-0.0201873779296875,
-0.01314544677734375,
-0.0318603515625,
0.0240020751953125,
0.0160980224609375,
-0.0301513671875,
-0.0435791015625,
-0.037353515625,
... |
yoshitomo-matsubara/srsd-feynman_medium | 2023-10-11T02:06:32.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0762",
"region:us"
] | yoshitomo-matsubara | null | null | 0 | 6 | 2022-06-08T06:22:10 | ---
pretty_name: SRSD-Feynman (Medium)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Medium set)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to evaluate the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used to evaluate the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Medium set*** of our SRSD-Feynman datasets, which consists of the following 40 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/resolve/main/problem_table.pdf)
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
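The splits are plain whitespace-delimited text where the rightmost column is the target, so a split can be loaded with only the standard library. A minimal sketch (the inline sample stands in for one of the txt files and is not actual dataset data):

```python
# Each row: num_variables input values followed by the function output.
sample = "1.0 2.0 5.0\n0.5 4.0 8.5\n"  # stand-in for a train/val/test txt file

rows = [[float(v) for v in line.split()] for line in sample.strip().splitlines()]
X = [r[:-1] for r in rows]  # input variables
y = [r[-1] for r in rows]   # rightmost column: target output
print(X, y)  # [[1.0, 2.0], [0.5, 4.0]] [5.0, 8.5]
```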
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., the speed of light, the gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that would confirm the physical phenomenon behind each equation.
Where a specific experiment was difficult to assume, ranges were set such that the corresponding physical phenomenon could still be observed.
Generally, ranges are sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small values appear as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables restricted to a specific sign were sampled within that range.
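The log-scale sampling described above can be sketched as follows; the concrete range and seed are arbitrary examples for illustration, not values taken from the dataset's annotations:

```python
import math
import random

def sample_log_uniform(low, high, n, seed=0):
    # Sample uniformly in log10 space so that small and large orders of
    # magnitude are equally represented, as described for most variables.
    rng = random.Random(seed)
    lo, hi = math.log10(low), math.log10(high)
    return [10 ** rng.uniform(lo, hi) for _ in range(n)]

values = sample_log_uniform(1e-2, 1e0, 5)  # spans two orders of magnitude
print(values)
```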
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset assuming typical physical experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choices of target equations are based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which are focused on a field of Physics.
### Other Known Limitations
Some variables used in our datasets represent counts and should, strictly speaking, be treated as integers.
Due to the capacity of a 32-bit integer, however, we treated some such variables as floats, e.g., the number of molecules (10^{23} - 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
| 6,348 | [
[
-0.00958251953125,
-0.036163330078125,
0.0303192138671875,
0.017333984375,
-0.01485443115234375,
-0.018280029296875,
-0.0000413060188293457,
-0.018524169921875,
0.027496337890625,
0.0267791748046875,
-0.056365966796875,
-0.03436279296875,
-0.04718017578125,
... |
yoshitomo-matsubara/srsd-feynman_hard | 2023-10-11T02:07:04.000Z | [
"task_categories:tabular-regression",
"annotations_creators:expert",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"arxiv:2206.10540",
"doi:10.57967/hf/0761",
"region:us"
] | yoshitomo-matsubara | null | null | 0 | 6 | 2022-06-08T06:22:25 | ---
pretty_name: SRSD-Feynman (Hard)
annotations_creators:
- expert
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- tabular-regression
task_ids: []
---
# Dataset Card for SRSD-Feynman (Hard set)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/omron-sinicx/srsd-benchmark
- **Paper:** [Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery](https://arxiv.org/abs/2206.10540)
- **Point of Contact:** [Yoshitaka Ushiku](mailto:yoshitaka.ushiku@sinicx.com)
### Dataset Summary
Our SRSD (Feynman) datasets are designed to discuss the performance of Symbolic Regression for Scientific Discovery.
We carefully reviewed the properties of each formula and its variables in [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html) to design reasonably realistic sampling ranges of values, so that our SRSD datasets can be used to evaluate the potential of SRSD, such as whether or not an SR method can (re)discover physical laws from such datasets.
This is the ***Hard set*** of our SRSD-Feynman datasets, which consists of the following 50 different physics formulas:
[](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/resolve/main/problem_table.pdf)
More details of these datasets are provided in [the paper and its supplementary material](https://arxiv.org/abs/2206.10540).
### Supported Tasks and Leaderboards
Symbolic Regression
## Dataset Structure
### Data Instances
Tabular data + Ground-truth equation per equation
Tabular data: (num_samples, num_variables+1), where the last (rightmost) column indicates the output of the target function for the given variables.
Note that the number of variables (`num_variables`) varies from equation to equation.
Ground-truth equation: *pickled* symbolic representation (equation with symbols in sympy) of the target function.
### Data Fields
For each dataset, we have
1. train split (txt file, whitespace as a delimiter)
2. val split (txt file, whitespace as a delimiter)
3. test split (txt file, whitespace as a delimiter)
4. true equation (pickle file for sympy object)
### Data Splits
- train: 8,000 samples per equation
- val: 1,000 samples per equation
- test: 1,000 samples per equation
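Each split is a plain text file with whitespace-delimited columns, so it can be parsed with no special tooling. A minimal loading sketch (the file paths below are hypothetical; the ground-truth equation is stored separately as a pickled sympy expression):

```python
import pickle

def load_split(path):
    """Parse one SRSD split file: each row holds the input variables
    followed by the target value in the last (rightmost) column."""
    inputs, targets = [], []
    with open(path) as f:
        for line in f:
            values = [float(v) for v in line.split()]
            inputs.append(values[:-1])
            targets.append(values[-1])
    return inputs, targets

# The ground-truth equation is a pickled sympy object, e.g.:
# with open("true_eq/feynman-i.12.1.pkl", "rb") as f:  # hypothetical path
#     equation = pickle.load(f)
```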
## Dataset Creation
### Curation Rationale
We chose target equations based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html).
### Annotations
#### Annotation process
We significantly revised the sampling range for each variable from the annotations in the Feynman Symbolic Regression Database.
First, we checked the properties of each variable and treated physical constants (e.g., the speed of light, the gravitational constant) as constants.
Next, variable ranges were defined to correspond to a typical physics experiment that would confirm the physical phenomenon behind each equation.
In cases where a specific experiment was difficult to assume, ranges were set such that the corresponding physical phenomenon can still be observed.
Generally, ranges are sampled on a log scale spanning about two orders of magnitude (10^2), so that both large and small changes in value are captured as the order changes.
Variables such as angles, for which a linear distribution is expected, are sampled uniformly.
In addition, variables that take a specific sign are sampled within that range.
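The sampling scheme described above can be sketched as follows (a simplified illustration with assumed range endpoints, not the curators' actual code):

```python
import math
import random

def sample_log_uniform(low, high):
    """Sample uniformly in log10 space, so every order of magnitude
    within [low, high] is equally likely to be drawn."""
    return 10.0 ** random.uniform(math.log10(low), math.log10(high))

def sample_angle():
    """Angles are expected to follow a linear distribution,
    so they are sampled uniformly."""
    return random.uniform(0.0, 2.0 * math.pi)
```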
#### Who are the annotators?
The main annotators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We annotated this dataset assuming typical physics experiments. The dataset will encourage research on symbolic regression for scientific discovery (SRSD) and help researchers discuss the potential of symbolic regression methods for data-driven scientific discovery.
### Discussion of Biases
Our choice of target equations is based on [the Feynman Symbolic Regression Database](https://space.mit.edu/home/tegmark/aifeynman.html), which focuses on the field of physics.
### Other Known Limitations
Some variables used in our datasets represent counts and thus should be treated as integers.
Due to the range limit of 32-bit integers, however, we treated some of these variables as floats, e.g., the number of molecules (10^{23} to 10^{25}).
## Additional Information
### Dataset Curators
The main curators are
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
[[Preprint](https://arxiv.org/abs/2206.10540)]
```bibtex
@article{matsubara2022rethinking,
title={Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery},
author={Matsubara, Yoshitomo and Chiba, Naoya and Igarashi, Ryo and Ushiku, Yoshitaka},
journal={arXiv preprint arXiv:2206.10540},
year={2022}
}
```
### Contributions
Authors:
- Yoshitomo Matsubara (@yoshitomo-matsubara)
- Naoya Chiba (@nchiba)
- Ryo Igarashi (@rigarash)
- Yoshitaka Ushiku (@yushiku)
BeIR/scifact-generated-queries | 2022-10-23T06:12:34.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | 0 | 6 | 2022-06-17T12:52:14 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.

A minimal loading sketch using the `beir` package (the dataset name and output directory are placeholders):

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

dataset = "scifact"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {qid: text},
# qrels: {qid: {doc_id: relevance score}}
corpus, queries, qrels = GenericDataLoader(data_path).load(split="test")
```
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models with standard IR metrics such as nDCG@10.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
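As a concrete illustration of this layout, the following sketch writes a toy corpus, queries file, and qrels file in the expected format (file contents are illustrative only, not part of any BEIR dataset):

```python
import csv
import json
import os

def write_beir_dataset(out_dir, corpus, queries, qrels):
    """Write corpus.jsonl, queries.jsonl and qrels/test.tsv
    in the directory layout BEIR expects."""
    os.makedirs(os.path.join(out_dir, "qrels"), exist_ok=True)
    with open(os.path.join(out_dir, "corpus.jsonl"), "w") as f:
        for doc_id, doc in corpus.items():
            f.write(json.dumps({"_id": doc_id,
                                "title": doc.get("title", ""),
                                "text": doc["text"]}) + "\n")
    with open(os.path.join(out_dir, "queries.jsonl"), "w") as f:
        for qid, text in queries.items():
            f.write(json.dumps({"_id": qid, "text": text}) + "\n")
    with open(os.path.join(out_dir, "qrels", "test.tsv"), "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["query-id", "corpus-id", "score"])  # header row
        for qid, docs in qrels.items():
            for doc_id, score in docs.items():
                writer.writerow([qid, doc_id, score])
```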
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
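Given dictionaries in this shape, a minimal retrieval evaluation can be sketched as follows (a simplified illustration, not the official BEIR evaluator, which reports metrics such as nDCG@10):

```python
def recall_at_k(qrels, results, k):
    """Fraction of relevant documents found in the top-k results,
    averaged over queries.  `results` maps qid -> ranked list of doc ids."""
    scores = []
    for qid, relevant in qrels.items():
        retrieved = set(results.get(qid, [])[:k])
        hits = sum(1 for doc_id in relevant if doc_id in retrieved)
        scores.append(hits / len(relevant))
    return sum(scores) / len(scores)
```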
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset.
spencer/dialogsum_reformat | 2022-06-20T22:27:54.000Z | [
"region:us"
] | spencer | null | null | 1 | 6 | 2022-06-20T22:27:43 | Entry not found
autoevaluate/mnist-sample | 2022-06-21T13:49:41.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 6 | 2022-06-21T13:49:37 | Entry not found
imvladikon/nemo_corpus | 2023-01-04T12:03:22.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"language:he",
"license:other",
"region:us"
] | imvladikon | \ | @article{10.1162/tacl_a_00404,
author = {Bareket, Dan and Tsarfaty, Reut},
title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {9},
pages = {909-928},
year = {2021},
month = {09},
abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token- level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00404},
url = {https://doi.org/10.1162/tacl\_a\_00404},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
} | 0 | 6 | 2022-06-28T16:51:45 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- he
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: bmc
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: validation
test_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# NEMO-Corpus - The Hebrew Named Entities and Morphology Corpus
**Disclaimer**: This is just a convenient Hugging Face `datasets` interface for research purposes, which fetches the original data from [github](https://github.com/OnlpLab/NEMO-Corpus). I am not an author of this work.
```python
from datasets import load_dataset
# the main corpus
ds = load_dataset('imvladikon/nemo_corpus')
for sample in ds["train"]:
print(sample)
# the nested corpus
ds = load_dataset('imvladikon/nemo_corpus', "nested")
```
Getting the classes and encoding/decoding labels can be done through these functions:
```python
idx2label = ds["train"].features["ner_tags"].feature.int2str
label2idx = ds["train"].features["ner_tags"].feature.str2int
```
Alternatively, just use the `raw_tags` field.
## Fields
available fields (flat):
* "id"
* "sentence"
* "tokens"
* "raw_tags"
* "ner_tags"
* "spans"
Example of the one record for `flat`:
```json
{'id': '0', 'tokens': ['"', 'תהיה', 'נקמה', 'ו', 'בגדול', '.'], 'sentence': '" תהיה נקמה ו בגדול .', 'raw_tags': ['O', 'O', 'O', 'O', 'O', 'O'], 'ner_tags': [24, 24, 24, 24, 24, 24], 'spans': {'span': [], 'start': [], 'end': [], 'entity': [], 'start_char': [], 'end_char': []}}
```
Example of the one record for `nested`:
```json
{'id': '0', 'tokens': ['"', 'תהיה', 'נקמה', 'ו', 'בגדול', '.'], 'ner_tags': [24, 24, 24, 24, 24, 24], 'ner_tags_2': [24, 24, 24, 24, 24, 24], 'ner_tags_3': [24, 24, 24, 24, 24, 24], 'ner_tags_4': [24, 24, 24, 24, 24, 24]}
```
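As the flat example above shows, the `sentence` field is simply the whitespace-joined `tokens`, and `ner_tags` holds integer indices into the label set (in this record, index `24` corresponds to the `O` tag). A small sketch of working with such a record (the `id2label` mapping below is shortened for illustration; the full mapping comes from `ds["train"].features["ner_tags"].feature.int2str`):

```python
record = {
    "tokens": ['"', "תהיה", "נקמה", "ו", "בגדול", "."],
    "ner_tags": [24, 24, 24, 24, 24, 24],
}

# Shortened label mapping, for illustration only.
id2label = {24: "O"}

sentence = " ".join(record["tokens"])          # reconstruct the sentence field
labels = [id2label[i] for i in record["ner_tags"]]  # decode integer tags
```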
## Dataset Description
it's README.md of the [original repository](https://github.com/OnlpLab/NEMO-Corpus)
Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme and token level NER labels, nested mentions, and more.
We publish the NEMO corpus in the TACL paper [*"Neural Modeling for Named Entities and Morphology (NEMO<sup>2</sup>)"*](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00404/107206/Neural-Modeling-for-Named-Entities-and-Morphology) [1], where we use it in extensive experiments and analyses, showing the importance of morphological boundaries for neural modeling of NER in morphologically rich languages. Code for these models and experiments can be found in the [NEMO code repo](https://github.com/OnlpLab/NEMO).
## Main features:
1. Morpheme, token-single and token-multi sequence labels. Morpheme labels provide exact boundaries, token-multi provide partial sub-word morphological but no exact boundaries, token-single provides only token-level information.
1. All annotations are in `BIOSE` format (`B`=Begin, `I`=Inside, `O`=Outside, `S`=Singleton, `E`=End).
1. Widely-used OntoNotes entity category set: `GPE` (geo-political entity), `PER` (person), `LOC` (location), `ORG` (organization), `FAC` (facility), `EVE` (event), `WOA` (work-of-art), `ANG` (language), `DUC` (product).
1. NEMO includes NER annotations for the two major versions of the Hebrew Treebank, UD (Universal Dependency) and SPMRL. These can be aligned to the other morphosyntactic information layers of the treebank using [bclm](https://github.com/OnlpLab/bclm)
1. We provide nested mentions. Only the first, widest, layer is used in the NEMO<sup>2</sup> paper. We invite you to take on this challenge!
1. Guidelines used for annotation are provided [here](./guidelines/).
1. Corpus was annotated by two native Hebrew speakers of academic education, and curated by the project manager. We provide the original annotations made by the annotators as well to promote work on [learning with disagreements](https://sites.google.com/view/semeval2021-task12/home).
1. Annotation was performed using [WebAnno](https://webanno.github.io/webanno/) (version 3.4.5)
## Legend for Files and Folder Structure
1. The two main [data](./data/) folders are [ud](./data/ud/) and [spmrl](./data/spmrl/), corresponding to the relevant Hebrew Treebank corpus version.
1. Both contain a `gold` folder ([spmrl/gold](./data/spmrl/gold/), [ud/gold](./data/ud/gold/)) of gold curated annotations.
1. Each `gold` folder contains files of the three input-output variants (morph, token-multi, token-single), for each of the treebank splits (train,dev,test).
1. Each `gold` folder also contains a `nested` subfolder ([spmrl/nested](./data/spmrl/gold/nested/), [ud/nested](./data/ud/gold/nested/)), which contains all layers of nested mentions (the first layer is the layer used in the non-nested files, and in the NEMO<sup>2</sup> paper [1])
1. The `ud` folder also contains an [ab_annotators](./data/ud/ab_annotators/) folder. This folder contains the original annotations made by each annotator (named `a`, `b`), including first-layer and nested annotations.
1. *\*UPDATE 2021-09-06\** `ud` folder now contains a [pilot_annotations](./data/ud/pilot_annotations/) folder. This folder contains the original annotations made by each annotator in our two phase pilot (phase I - sentences 1-200 of dev; phase II - sentences 201-400 of dev).
## Basic Corpus Statistics
| | train | dev | test |
|------------------------------| --:| --:| --:|
| Sentences | 4,937 | 500 | 706 |
| Tokens | 93,504 | 8,531 | 12,619 |
| Morphemes | 127,031 | 11,301 | 16,828 |
| All mentions | 6,282 | 499 | 932 |
| Type: Person (PER) | 2,128 | 193 | 267 |
| Type: Organization (ORG) | 2,043 | 119 | 408 |
| Type: Geo-Political (GPE) | 1,377 | 121 | 195 |
| Type: Location (LOC) | 331 | 28 | 41 |
| Type: Facility (FAC) | 163 | 12 | 11 |
| Type: Work-of-Art (WOA) | 114 | 9 | 6 |
| Type: Event (EVE) | 57 | 12 | 0 |
| Type: Product (DUC) | 36 | 2 | 3 |
| Type: Language (ANG) | 33 | 3 | 1 |
## Aligned Treebank Versions
The NEMO corpus matches the treebank version of [bclm v.1.0.0](https://github.com/OnlpLab/bclm/releases/tag/v1.0.0-alpha).
This version is based on the [HTB UD v2.2](https://github.com/UniversalDependencies/UD_Hebrew-HTB/releases/tag/r2.2) and the [latest SPMRL HTB version](https://github.com/OnlpLab/HebrewResources/tree/102674bb030f5836e1ab827feb63954ad7a6f8fe/HebrewTreebank/hebtb).
The changes contain (but might not be limited to the following):
1. Flagged and dropped duplicate and leaking sentences (between train and test). In addition to the sentences already removed in the bclm v1.0.0 HTB version, the following duplicate sentences were dropped as well (SPMRL sentence IDs): 5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459 (in the bclm dataframes, these are marked in the `duplicate_sent_id` column).
To read the treebank (UD/SPMRL) in a way that matches the NEMO corpus, you can use the following:
```python
import bclm
dropped = [5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459]
spdf = bclm.read_dataframe('spmrl') # load SPMRL treebank dataframe
global_dropped = [spdf[spdf.sent_id==d].global_sent_id.iat[0] for d in dropped]
uddf = bclm.read_dataframe('ud') # load UD treebank dataframe
uddf = uddf[(~uddf.global_sent_id.isin(global_dropped))] # remove extra duplicates
spdf = spdf[(~spdf.sent_id.isin(dropped))] # remove extra duplicates
# The resulting dataframes contain gold morph NER labels in the `biose_layer0`, `biose_layer1`... columns.
```
2. The UD treebank contains many more duplicates. In this version: all sentences exist in both UD and SPMRL versions, and all sentences and tokens are aligned between UD and SPMRL.
2. Fixed numbers that were originally reversed.
2. Fixed mismatches between tokens and morphemes.
2. Added Binyan feature.
2. No individual morphemes or tokens were added or removed, only complete sentences.
## Evaluation
An evaluation script is provided in the [NEMO code repo](https://github.com/OnlpLab/NEMO#evaluation) along with evaluation instructions.
## Citations
##### [1]
If you use the NEMO corpus in your research, please cite the NEMO<sup>2</sup> paper:
```bibtex
@article{10.1162/tacl_a_00404,
author = {Bareket, Dan and Tsarfaty, Reut},
title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
journal = {Transactions of the Association for Computational Linguistics},
volume = {9},
pages = {909-928},
year = {2021},
month = {09},
abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token- level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
issn = {2307-387X},
doi = {10.1162/tacl_a_00404},
url = {https://doi.org/10.1162/tacl\_a\_00404},
eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
}
```
##### [2]
Please cite the Hebrew Treebank as well, described in the following paper:
```bibtex
@article{sima2001building,
title={Building a tree-bank of modern Hebrew text},
author={Sima’an, Khalil and Itai, Alon and Winter, Yoad and Altman, Alon and Nativ, Noa},
journal={Traitement Automatique des Langues},
volume={42},
number={2},
pages={247--380},
year={2001},
publisher={Citeseer}
}
```
##### [3]
The UD version of the Hebrew Treebank is described in:
```bibtex
@inproceedings{sade-etal-2018-hebrew,
title = "The {H}ebrew {U}niversal {D}ependency Treebank: Past Present and Future",
author = "Sade, Shoval and
Seker, Amit and
Tsarfaty, Reut",
booktitle = "Proceedings of the Second Workshop on Universal Dependencies ({UDW} 2018)",
month = nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W18-6016",
doi = "10.18653/v1/W18-6016",
pages = "133--143",
abstract = "The Hebrew treebank (HTB), consisting of 6221 morpho-syntactically annotated newspaper sentences, has been the only resource for training and validating statistical parsers and taggers for Hebrew, for almost two decades now. During these decades, the HTB has gone through a trajectory of automatic and semi-automatic conversions, until arriving at its UDv2 form. In this work we manually validate the UDv2 version of the HTB, and, according to our findings, we apply scheme changes that bring the UD HTB to the same theoretical grounds as the rest of UD. Our experimental parsing results with UDv2New confirm that improving the coherence and internal consistency of the UD HTB indeed leads to improved parsing performance. At the same time, our analysis demonstrates that there is more to be done at the point of intersection of UD with other linguistic processing layers, in particular, at the points where UD interfaces external morphological and lexical resources.",
}
``` | 12,673 | [
[
-0.0310211181640625,
-0.050811767578125,
0.00199127197265625,
0.0097198486328125,
-0.0164947509765625,
-0.01107025146484375,
-0.0167694091796875,
-0.042022705078125,
0.03424072265625,
0.037261962890625,
-0.03875732421875,
-0.056427001953125,
-0.050201416015625,
... |
ZeyadAhmed/Arabic-SQuADv2.0 | 2022-06-29T16:04:58.000Z | [
"region:us"
] | ZeyadAhmed | null | null | 0 | 6 | 2022-06-29T15:14:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
MicPie/unpredictable_gamefaqs-com | 2022-08-04T20:08:30.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | MicPie | The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card. | @misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
} | 0 | 6 | 2022-07-03T10:10:20 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-gamefaqs-com
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-gamefaqs-com" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, in contrast to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a JSON Lines (.jsonl) file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target representing an individual column of the same row. Each task contains several such examples, which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
- 'task': task identifier
- 'input': column elements of a specific row in the table
- 'options': for multiple-choice classification, the options to choose from
- 'output': target column element of the same row as the input
- 'pageTitle': the title of the page containing the table
- 'outputColName': output column name
- 'url': URL of the website containing the table
- 'wdcFile': WDC Web Table Corpus file
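As a hedged illustration of this record format, the snippet below parses two invented records (they are not actual rows from this dataset, but they follow the documented 'task'/'input'/'options'/'output' fields) and concatenates all but the last one into a few-shot prompt, leaving the final 'input' as the query to complete:

```python
import json

# Two invented example records following the documented fields
# ('task', 'input', 'options', 'output'); real records come from
# the per-task .jsonl files in this dataset.
raw_lines = [
    '{"task": "demo", "input": "Title: Portal | Developer?", "options": ["Valve", "Nintendo"], "output": "Valve"}',
    '{"task": "demo", "input": "Title: Mario Kart | Developer?", "options": ["Valve", "Nintendo"], "output": "Nintendo"}',
]
examples = [json.loads(line) for line in raw_lines]

# All but the last example become in-context demonstrations;
# the last 'input' is the query the model should complete.
demos = "\n".join(f"{ex['input']} -> {ex['output']}" for ex in examples[:-1])
prompt = f"{demos}\n{examples[-1]['input']} ->"
print(prompt)
```

The "->" separator is an arbitrary choice for this sketch; any consistent input/output delimiter would serve the same purpose.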
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
| 14,803 | [
[
-0.040557861328125,
-0.04107666015625,
0.0307769775390625,
0.0236968994140625,
0.006137847900390625,
0.0111236572265625,
-0.00673675537109375,
-0.0423583984375,
0.035369873046875,
0.02142333984375,
-0.0751953125,
-0.0462646484375,
-0.04473876953125,
0.015464... |
MicPie/unpredictable_cappex-com | 2022-08-04T19:41:09.000Z | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:table-question-answering",
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:tabular-cl... | MicPie | The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. For more details please see the accompanying dataset card. | @misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
} | 0 | 6 | 2022-07-03T11:04:27 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-cappex.com
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-cappex.com" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have thousands of tasks, while each task has only a few examples, in contrast to most current NLP datasets, which are very deep, i.e., tens of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a JSON Lines (.jsonl) file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target representing an individual column of the same row. Each task contains several such examples, which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
- 'task': task identifier
- 'input': column elements of a specific row in the table
- 'options': for multiple-choice classification, the options to choose from
- 'output': target column element of the same row as the input
- 'pageTitle': the title of the page containing the table
- 'outputColName': output column name
- 'url': URL of the website containing the table
- 'wdcFile': WDC Web Table Corpus file
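A minimal sketch of how such a task file can be turned into a few-shot prompt; the two records below are invented for illustration only (this card's source site is a college-search website, so a plausible table row is used), following the documented 'task'/'input'/'options'/'output' fields:

```python
import json

# Invented records in the documented format ('task', 'input',
# 'options', 'output'); real records come from this dataset's
# per-task .jsonl files.
raw_lines = [
    '{"task": "demo", "input": "School: College A | Setting?", "options": ["Urban", "Rural"], "output": "Urban"}',
    '{"task": "demo", "input": "School: College B | Setting?", "options": ["Urban", "Rural"], "output": "Rural"}',
]
examples = [json.loads(line) for line in raw_lines]

# Earlier examples serve as in-context demonstrations; the last
# 'input' is left incomplete as the query for the model.
demos = "\n".join(f"{ex['input']} -> {ex['output']}" for ex in examples[:-1])
prompt = f"{demos}\n{examples[-1]['input']} ->"
print(prompt)
```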
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
---

**Record:** MicPie/unpredictable_en-wikipedia-org (author: MicPie; created: 2022-07-03; last modified: 2022-08-04; likes: 1; downloads: 6)

---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-en-wikipedia-org
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-en-wikipedia-org" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. Our dataset is very wide: it contains thousands of tasks, each with only a few examples, whereas most current NLP datasets are deep, with tens of tasks and many examples per task. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary with a 'task' field that identifies the task, followed by 'input', 'options', and 'output' fields. The 'input' field contains several column elements of the same table row, while the 'output' field is the target, representing an individual column of that row. Each task contains several such examples, which can be concatenated as a few-shot task. In the case of multiple-choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
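Concatenating such examples into a few-shot prompt can be sketched as follows. The prompt format here is an assumption for illustration, not something prescribed by the dataset:

```python
# Sketch of turning a task's examples into a few-shot prompt; the exact
# "Input:/Output:" template is an assumption, not part of the dataset.

def build_fewshot_prompt(examples, query_input):
    """Join (input, output) demonstrations and append an unanswered query."""
    demos = [f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples]
    demos.append(f"Input: {query_input}\nOutput:")
    return "\n\n".join(demos)

examples = [
    {"input": "color: red", "output": "warm"},
    {"input": "color: blue", "output": "cool"},
]
prompt = build_fewshot_prompt(examples, "color: orange")
# prompt ends with the unanswered query "Input: color: orange\nOutput:"
```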
### Data Fields
- 'task': task identifier
- 'input': column elements of a specific row in the table
- 'options': for multiple-choice classification, the possible options to choose from
- 'output': target column element of the same row as the input
- 'pageTitle': the title of the page containing the table
- 'outputColName': the name of the output column
- 'url': the URL of the website containing the table
- 'wdcFile': the source file in the WDC Web Table Corpus
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
---

**Record:** s3prl/iemocap_split (author: s3prl; created: 2022-07-05; last modified: 2022-07-10; likes: 0; downloads: 6). Card: Entry not found.
---

**Record:** biglam/cultural_heritage_metadata_accuracy (author: biglam; created: 2022-07-07; last modified: 2022-07-22; likes: 4; downloads: 6)

---
annotations_creators:
- machine-generated
- expert-generated
language:
- it
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Annotated dataset to assess the accuracy of the textual description of
cultural heritage records
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
---
# Dataset Card for Annotated dataset to assess the accuracy of the textual description of cultural heritage records
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[https://doi.org/10.6084/m9.figshare.13359104.v1](https://doi.org/10.6084/m9.figshare.13359104.v1)
- **Repository:**[https://doi.org/10.6084/m9.figshare.13359104.v1](https://doi.org/10.6084/m9.figshare.13359104.v1)
- **Paper:**[https://doi.org/10.1007/s00799-021-00302-1](https://doi.org/10.1007/s00799-021-00302-1)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains more than 100K textual descriptions of cultural items from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en), the Italian National Cultural aggregator. Each of the description is labeled either HIGH or LOW quality, according its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories.
### Supported Tasks and Leaderboards
This dataset can be used for text classification tasks. The [paper](https://doi.org/10.1007/s00799-021-00302-1) introducing the dataset achieved an F1 score of 0.783 on the task of classifying whether a metadata record is low or high quality. Please see the [results table](https://link.springer.com/article/10.1007/s00799-021-00302-1/tables/4) for a full overview of the results reported in the paper.
### Languages
The dataset consists of Italian metadata records. The labels are in English.
## Dataset Structure
The dataset has only one configuration.
### Data Instances
An example instance from the dataset:
``` python
{'metadata_text': 'Figure:putto.Oggetti:ghirlanda di fiori',
'label': 0,
'source': 'OpereArteVisiva'}
```
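In the instance above, `label` is an integer class index. A minimal sketch of mapping indices back to the two quality labels follows; the index order used here is an assumption for illustration, so check the dataset's class-label metadata for the authoritative mapping:

```python
# The index order below is an assumption for illustration; verify it
# against the dataset's own label metadata before relying on it.
LABEL_NAMES = ["Low_Quality", "High_Quality"]

record = {
    "metadata_text": "Figure:putto.Oggetti:ghirlanda di fiori",
    "label": 0,
    "source": "OpereArteVisiva",
}
# Look up the human-readable name for the record's integer label.
label_name = LABEL_NAMES[record["label"]]
# label_name == "Low_Quality" under the assumed ordering
```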
### Data Fields
The datafields are:
- `metadata_text`: this contains the metadata text which was sourced from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en)
- `label`: this is the label indicating if the record is `High_Quality`, or `Low_Quality`. Most of the dataset was manually annotated, with ~30K descriptions automatically labelled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections.
- `source`: the source of the metadata record
### Data Splits
The paper evaluates models using ten-fold cross-validation; the dataset itself does not define fixed train/validation/test splits.
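A ten-fold cross-validation split can be reproduced with a simple index partition. The paper's exact fold assignment is not published, so this stdlib-only sketch is only illustrative:

```python
# Stdlib-only sketch of a ten-fold cross-validation split over record
# indices; the paper's exact fold assignment is not published, so the
# shuffle seed and strategy here are arbitrary.
import random

def kfold_indices(n_items, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) pairs; each item is in exactly one test fold."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)
    # Stride through the shuffled indices to form n_folds disjoint folds.
    folds = [idx[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train_idx, test_idx

splits = list(kfold_indices(100, n_folds=10))
# 10 splits; every index appears in exactly one test fold
```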
## Dataset Creation
The dataset was generated using records from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en). From the paper introducing the dataset:
> By using the textual description encoded by the dc:description element from the Dublin Core metadata schema, we collect a dataset of 100,821 descriptions, after duplicate removal. These records include mainly data from “Musei d’Italia” and “Regione Marche” datasets, which have been chosen because they contain a high number of non-empty dc:description elements. p.221
### Curation Rationale
From the paper:
> Duplicates were removed for two reasons: this reduced annotation effort in the subsequent manual annotation, and avoided that the same example appear both in the training and in the test set, a situation that could make classification biased and lead to inaccurate evaluation in supervised settings. Duplicated descriptions were mainly short and of low-quality, reporting few generic words to describe an item (e.g. “Mensola.”, “Dipinto.”). p.221
### Source Data
#### Initial Data Collection and Normalization
The dataset was generated using records from [Cultura Italia](http://www.culturaitalia.it/opencms/index.jsp?language=en). This repository is accessible via an OAI-PMH handler or via a [SPARQL endpoint](http://dati.culturaitalia.it/sparql).
As discussed above duplicates were removed from the dataset.
#### Who are the source language producers?
The metadata producers are staff working in Italian cultural heritage institutions.
### Annotations
#### Annotation process
From the paper:
> "Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections."
To determine the quality of the collected descriptions the authors of the paper used guidelines from the [Istituto Centrale per il Catalogo e la Documentazione](http://www.iccd.beniculturali.it/)
From the paper:
> "More precisely, a specific section of the guidelines addresses how to describe any cultural item, clarifying that both the object and the subject of the item must be presented in the description as follows:
> Object: the object typology and shape must be described. To describe the object, the cataloguer must refer to the vocabularies provided by ICCD, using specific terminology (e.g. the technique used for paintings and drawings, or the material for the archaeological items);
> Subject: the cataloguer must report the iconographic and decorative settings of the item, such as the characters of the depicted scene in a painting and their attribution. Other aspects (e.g. the history behind the painting or the painter) should not be included." p.221
#### Who are the annotators?
> "The annotation is carried out by an expert in cultural heritage who collaborated in the past with Cultura Italia and has therefore in-depth knowledge of the data characteristics and of the ICCD guidelines." p.222
### Personal and Sensitive Information
No personal or sensitive information is described in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Lorenzini, Matteo
- Rospocher, Marco
- Tonelli, Sara
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{Lorenzini2020,
author = "Matteo Lorenzini and Marco Rospocher and Sara Tonelli",
title = "{Annotated dataset to assess the accuracy of the textual description of cultural heritage records}",
year = "2020",
month = "12",
url = "https://figshare.com/articles/dataset/Annotated_dataset_to_assess_the_accuracy_of_the_textual_description_of_cultural_heritage_records/13359104",
doi = "10.6084/m9.figshare.13359104.v1"
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
---

**Record:** MicPie/unpredictable_cluster08 (author: MicPie; created: 2022-07-08; last modified: 2022-08-04; likes: 0; downloads: 6)

---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UnpredicTable-cluster08
size_categories:
- 100K<n<1M
source_datasets: []
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
- text2text-generation
- table-question-answering
- text-generation
- text-classification
- tabular-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- closed-domain-qa
- closed-book-qa
- open-book-qa
- language-modeling
- multi-class-classification
- natural-language-inference
- topic-classification
- multi-label-classification
- tabular-multi-class-classification
- tabular-multi-label-classification
---
# Dataset Card for "UnpredicTable-cluster08" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://ethanperez.net/unpredictable
- **Repository:** https://github.com/JunShern/few-shot-adaptation
- **Paper:** Few-shot Adaptation Works with UnpredicTable Data
- **Point of Contact:** junshern@nyu.edu, perez@nyu.edu
### Dataset Summary
The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.
There are several dataset versions available:
* [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites.
* [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/MicPie/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/MicPie/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites.
* [UnpredicTable-5k](https://huggingface.co/datasets/MicPie/unpredictable_5k): This dataset contains 5k random tables from the full dataset.
* UnpredicTable data subsets based on a manual human quality rating (please see our publication for details of the ratings):
* [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low)
* [UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium)
* [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high)
* UnpredicTable data subsets based on the website of origin:
* [UnpredicTable-baseball-fantasysports-yahoo-com](https://huggingface.co/datasets/MicPie/unpredictable_baseball-fantasysports-yahoo-com)
* [UnpredicTable-bulbapedia-bulbagarden-net](https://huggingface.co/datasets/MicPie/unpredictable_bulbapedia-bulbagarden-net)
* [UnpredicTable-cappex-com](https://huggingface.co/datasets/MicPie/unpredictable_cappex-com)
* [UnpredicTable-cram-com](https://huggingface.co/datasets/MicPie/unpredictable_cram-com)
* [UnpredicTable-dividend-com](https://huggingface.co/datasets/MicPie/unpredictable_dividend-com)
* [UnpredicTable-dummies-com](https://huggingface.co/datasets/MicPie/unpredictable_dummies-com)
* [UnpredicTable-en-wikipedia-org](https://huggingface.co/datasets/MicPie/unpredictable_en-wikipedia-org)
* [UnpredicTable-ensembl-org](https://huggingface.co/datasets/MicPie/unpredictable_ensembl-org)
* [UnpredicTable-gamefaqs-com](https://huggingface.co/datasets/MicPie/unpredictable_gamefaqs-com)
* [UnpredicTable-mgoblog-com](https://huggingface.co/datasets/MicPie/unpredictable_mgoblog-com)
* [UnpredicTable-mmo-champion-com](https://huggingface.co/datasets/MicPie/unpredictable_mmo-champion-com)
* [UnpredicTable-msdn-microsoft-com](https://huggingface.co/datasets/MicPie/unpredictable_msdn-microsoft-com)
* [UnpredicTable-phonearena-com](https://huggingface.co/datasets/MicPie/unpredictable_phonearena-com)
* [UnpredicTable-sittercity-com](https://huggingface.co/datasets/MicPie/unpredictable_sittercity-com)
* [UnpredicTable-sporcle-com](https://huggingface.co/datasets/MicPie/unpredictable_sporcle-com)
* [UnpredicTable-studystack-com](https://huggingface.co/datasets/MicPie/unpredictable_studystack-com)
* [UnpredicTable-support-google-com](https://huggingface.co/datasets/MicPie/unpredictable_support-google-com)
* [UnpredicTable-w3-org](https://huggingface.co/datasets/MicPie/unpredictable_w3-org)
* [UnpredicTable-wiki-openmoko-org](https://huggingface.co/datasets/MicPie/unpredictable_wiki-openmoko-org)
* [UnpredicTable-wkdu-org](https://huggingface.co/datasets/MicPie/unpredictable_wkdu-org)
* UnpredicTable data subsets based on clustering (for the clustering details please see our publication):
* [UnpredicTable-cluster00](https://huggingface.co/datasets/MicPie/unpredictable_cluster00)
* [UnpredicTable-cluster01](https://huggingface.co/datasets/MicPie/unpredictable_cluster01)
* [UnpredicTable-cluster02](https://huggingface.co/datasets/MicPie/unpredictable_cluster02)
* [UnpredicTable-cluster03](https://huggingface.co/datasets/MicPie/unpredictable_cluster03)
* [UnpredicTable-cluster04](https://huggingface.co/datasets/MicPie/unpredictable_cluster04)
* [UnpredicTable-cluster05](https://huggingface.co/datasets/MicPie/unpredictable_cluster05)
* [UnpredicTable-cluster06](https://huggingface.co/datasets/MicPie/unpredictable_cluster06)
* [UnpredicTable-cluster07](https://huggingface.co/datasets/MicPie/unpredictable_cluster07)
* [UnpredicTable-cluster08](https://huggingface.co/datasets/MicPie/unpredictable_cluster08)
* [UnpredicTable-cluster09](https://huggingface.co/datasets/MicPie/unpredictable_cluster09)
* [UnpredicTable-cluster10](https://huggingface.co/datasets/MicPie/unpredictable_cluster10)
* [UnpredicTable-cluster11](https://huggingface.co/datasets/MicPie/unpredictable_cluster11)
* [UnpredicTable-cluster12](https://huggingface.co/datasets/MicPie/unpredictable_cluster12)
* [UnpredicTable-cluster13](https://huggingface.co/datasets/MicPie/unpredictable_cluster13)
* [UnpredicTable-cluster14](https://huggingface.co/datasets/MicPie/unpredictable_cluster14)
* [UnpredicTable-cluster15](https://huggingface.co/datasets/MicPie/unpredictable_cluster15)
* [UnpredicTable-cluster16](https://huggingface.co/datasets/MicPie/unpredictable_cluster16)
* [UnpredicTable-cluster17](https://huggingface.co/datasets/MicPie/unpredictable_cluster17)
* [UnpredicTable-cluster18](https://huggingface.co/datasets/MicPie/unpredictable_cluster18)
* [UnpredicTable-cluster19](https://huggingface.co/datasets/MicPie/unpredictable_cluster19)
* [UnpredicTable-cluster20](https://huggingface.co/datasets/MicPie/unpredictable_cluster20)
* [UnpredicTable-cluster21](https://huggingface.co/datasets/MicPie/unpredictable_cluster21)
* [UnpredicTable-cluster22](https://huggingface.co/datasets/MicPie/unpredictable_cluster22)
* [UnpredicTable-cluster23](https://huggingface.co/datasets/MicPie/unpredictable_cluster23)
* [UnpredicTable-cluster24](https://huggingface.co/datasets/MicPie/unpredictable_cluster24)
* [UnpredicTable-cluster25](https://huggingface.co/datasets/MicPie/unpredictable_cluster25)
* [UnpredicTable-cluster26](https://huggingface.co/datasets/MicPie/unpredictable_cluster26)
* [UnpredicTable-cluster27](https://huggingface.co/datasets/MicPie/unpredictable_cluster27)
* [UnpredicTable-cluster28](https://huggingface.co/datasets/MicPie/unpredictable_cluster28)
* [UnpredicTable-cluster29](https://huggingface.co/datasets/MicPie/unpredictable_cluster29)
* [UnpredicTable-cluster-noise](https://huggingface.co/datasets/MicPie/unpredictable_cluster-noise)
### Supported Tasks and Leaderboards
Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide: we have thousands of tasks, each with only a few examples, whereas most current NLP datasets are very deep, with tens of tasks and many examples per task. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.
The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.
### Languages
English
## Dataset Structure
### Data Instances
Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.
There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.
### Data Fields
- 'task': task identifier
- 'input': column elements of a specific row in the table
- 'options': for multiple-choice classification, the options to choose from
- 'output': target column element of the same row as the input
- 'pageTitle': the title of the page containing the table
- 'outputColName': output column name
- 'url': URL of the website containing the table
- 'wdcFile': WDC Web Table Corpus file
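A minimal sketch of assembling these fields into a few-shot prompt (the field names follow the schema above; the prompt template and the example values are invented for illustration, not the format used in the paper):

```python
def build_few_shot_prompt(examples):
    """Concatenate UnpredicTable examples (dicts with 'input', 'options',
    'output') into a single few-shot prompt string."""
    blocks = []
    for ex in examples:
        lines = [f"Input: {ex['input']}"]
        if ex.get("options"):  # only multiple-choice tasks carry options
            lines.append("Options: " + ", ".join(ex["options"]))
        lines.append(f"Output: {ex['output']}")
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

# Toy examples following the schema described above (values are invented).
examples = [
    {"input": "[Name] Mercury [Position] 1", "options": [], "output": "planet"},
    {"input": "[Name] Ceres [Position] -", "options": [], "output": "dwarf planet"},
]
prompt = build_few_shot_prompt(examples)
```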
### Data Splits
The UnpredicTable datasets do not come with additional data splits.
## Dataset Creation
### Curation Rationale
Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.
### Source Data
#### Initial Data Collection and Normalization
We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.
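Roughly, the tables-to-tasks idea can be sketched as follows — a simplified toy version that treats each column in turn as the output column; the actual pipeline described in the publication applies additional filtering and formatting heuristics:

```python
def table_to_tasks(header, rows):
    """Turn one relational table into several few-shot tasks, one per
    candidate output column: each row becomes an (input, output) example
    where the output is that row's value in the chosen column."""
    tasks = {}
    for out_idx, out_col in enumerate(header):
        examples = []
        for row in rows:
            # Serialize the remaining cells of the row as the input.
            inp = " ".join(
                f"[{col}] {val}"
                for col, val in zip(header, row)
                if col != out_col
            )
            examples.append({"input": inp, "output": row[out_idx]})
        tasks[out_col] = examples
    return tasks

# Invented toy table for illustration.
header = ["Team", "City", "Founded"]
rows = [["Ajax", "Amsterdam", "1900"], ["Porto", "Porto", "1893"]]
tasks = table_to_tasks(header, rows)
```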
#### Who are the source language producers?
The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/).
### Annotations
#### Annotation process
Manual annotation was only carried out for the [UnpredicTable-rated-low](https://huggingface.co/datasets/MicPie/unpredictable_rated-low),
[UnpredicTable-rated-medium](https://huggingface.co/datasets/MicPie/unpredictable_rated-medium), and [UnpredicTable-rated-high](https://huggingface.co/datasets/MicPie/unpredictable_rated-high) data subsets to rate task quality. Detailed annotation instructions can be found in our publication.
#### Who are the annotators?
Annotations were carried out by a lab assistant.
### Personal and Sensitive Information
The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.
### Discussion of Biases
Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.
### Other Known Limitations
No additional known limitations.
## Additional Information
### Dataset Curators
Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez
### Licensing Information
Apache 2.0
### Citation Information
```
@misc{chan2022few,
author = {Chan, Jun Shern and Pieler, Michael and Jao, Jonathan and Scheurer, Jérémy and Perez, Ethan},
title = {Few-shot Adaptation Works with UnpredicTable Data},
publisher={arXiv},
year = {2022},
url = {https://arxiv.org/abs/2208.01009}
}
```
| 14,797 | [embedding vector omitted] |
saadob12/chart-to-text | 2022-07-10T10:09:33.000Z | [
"arxiv:2203.06486",
"region:us"
] | saadob12 | null | null | 3 | 6 | 2022-07-09T12:10:51 | This dataset consists only of the linearized underlying data tables of charts and their corresponding summaries.
Model that uses this dataset: https://huggingface.co/saadob12/t5_C2T_big
## Created By:
Kanthara, S., Leong, R. T. K., Lin, X., Masry, A., Thakkar, M., Hoque, E., & Joty, S. (2022). Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. arXiv preprint arXiv:2203.06486.
**Paper**: https://arxiv.org/abs/2203.06486
**Original GitHub repo**: https://github.com/vis-nlp/Chart-to-text
# Abstract from the Paper
Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.
### Note
The original paper published two sub-datasets one collected from statista and the other from pew. The dataset upload here is from statista. Images can be downloaded from the github repo mentioned above.
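To give an idea of what a linearized input looks like, here is a hypothetical linearization sketch (the table values and the separator format are invented for illustration and may differ from the benchmark's exact format):

```python
def linearize_table(title, header, rows):
    """Flatten a chart's underlying data table into one string, e.g. as
    input to a T5-style sequence-to-sequence summarizer."""
    parts = [f"title: {title}"]
    parts.append(" | ".join(header))
    for row in rows:
        parts.append(" | ".join(str(v) for v in row))
    return " <row> ".join(parts)

# Invented example data, purely for illustration.
linearized = linearize_table(
    "Smartphone shipments",
    ["Year", "Units (millions)"],
    [[2019, 1371], [2020, 1280]],
)
```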
# Language
Both the data and the summaries are in English.
# Dataset split
| train | valid | test |
|:---:|:---:| :---:|
| 24367 | 5222 | 5222 |
**Name of Contributor:** Saad Obaid ul Islam | 2,032 | [embedding vector omitted] |
bhadresh-savani/image-to-style | 2022-07-20T08:58:29.000Z | [
"region:us"
] | bhadresh-savani | null | null | 0 | 6 | 2022-07-11T14:22:03 | Entry not found | 15 | [embedding vector omitted] |
sileod/wikimedqa | 2023-05-16T07:47:46.000Z | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | sileod | Anonymous submission | @article{sileo2023generating,
title={Generating multiple-choice questions for medical question answering with distractors and cue-masking},
author={Sileo, Damien and Uma, Kanimozhi and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2303.07069},
year={2023}
} | 6 | 6 | 2022-07-14T15:09:22 | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
language:
- en
tags:
- medical
---
```bib
@article{sileo2023wikimedqa,
title={Generating multiple-choice questions for medical question answering with distractors and cue-masking},
author={Sileo, Damien and Uma, Kanimozhi and Moens, Marie-Francine},
journal={arXiv preprint arXiv:2303.07069 },
year={2023}
}
``` | 400 | [embedding vector omitted] |
ttxy/emotion | 2023-08-17T02:25:59.000Z | [
"task_categories:text-classification",
"language:code",
"license:bsd",
"classification",
"region:us"
] | ttxy | null | null | 2 | 6 | 2022-07-24T06:00:03 | ---
language:
- code
pretty_name: "English Emotion classification"
tags:
- classification
license: "bsd"
task_categories:
- text-classification
---
A dataset of English Twitter messages labeled with six basic emotions: anger, fear, joy, love, sadness, and surprise.
GitHub link: https://github.com/dair-ai/emotion_dataset
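For training a classifier on the six classes, a simple id↔label mapping suffices — a minimal sketch (the label order here is an assumption; see the GitHub repo above for the canonical mapping):

```python
# Label order is assumed (alphabetical), not taken from the source repo.
EMOTIONS = ["anger", "fear", "joy", "love", "sadness", "surprise"]
label2id = {name: i for i, name in enumerate(EMOTIONS)}
id2label = {i: name for name, i in label2id.items()}
```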
| 249 | [embedding vector omitted] |
chintagunta85/bionlp2 | 2022-07-28T09:04:24.000Z | [
"region:us"
] | chintagunta85 | [BioNLP2004 NER dataset](https://aclanthology.org/W04-1213.pdf) | @inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
https://huggingface.co/datasets/chintagunta85/bionlp/raw/main/test_bionlp.json | 0 | 6 | 2022-07-28T07:27:12 | Entry not found | 15 | [embedding vector omitted] |
okg/turkish-poems | 2022-07-31T10:22:53.000Z | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:tr",
"license:unknown",
"region:us"
] | okg | null | null | 1 | 6 | 2022-07-31T10:09:54 | ---
annotations_creators:
- found
language:
- tr
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: turkish-poems
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-generation
- text-classification
task_ids:
- language-modeling
- text-scoring
---
Turkish poems scraped from antoloji.com. Features consist of id, poet name, poem rating, and the poem.
| 421 | [embedding vector omitted] |
alex-apostolo/filtered-cuad | 2022-08-04T06:24:04.000Z | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cuad",
"language:en",
"license:cc-by-4.0",
"arxiv:2103.06... | alex-apostolo | null | null | 1 | 6 | 2022-08-03T15:59:24 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- cuad
task_categories:
- question-answering
task_ids:
- closed-domain-qa
- extractive-qa
paperswithcode_id: cuad
pretty_name: CUAD
train-eval-index:
- config: default
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: test
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: cuad
name: CUAD
---
# Dataset Card for filtered_cuad
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Contract Understanding Atticus Dataset](https://www.atticusprojectai.org/cuad)
- **Repository:** [Contract Understanding Atticus Dataset](https://github.com/TheAtticusProject/cuad/)
- **Paper:** [CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review](https://arxiv.org/abs/2103.06268)
- **Point of Contact:** [Atticus Project Team](info@atticusprojectai.org)
### Dataset Summary
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of more than 13,000 labels in 510 commercial legal contracts that have been manually labeled to identify 41 categories of important clauses that lawyers look for when reviewing contracts in connection with corporate transactions. This dataset is a filtered version of CUAD. It excludes legal contracts with an Agreement date prior to 2002 and contracts which are not Business to Business. From the 41 categories we filtered them down to 12 which we considered the most crucial.
We wanted a small dataset so we could quickly fine-tune different models without sacrificing the categories we deemed important. Most questions had to be removed because they have no answer, which is problematic since it can skew the resulting metrics such as the F1 score and the AUPR curve.
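Removing unanswerable questions amounts to filtering out SQuAD-format records whose `answers` field is empty — a minimal sketch (simplified; the full filtering also applied the agreement-date, business-to-business, and category criteria described above):

```python
def drop_unanswerable(examples):
    """Keep only SQuAD-format examples whose 'answers' field contains at
    least one answer span; empty-answer questions skew F1/AUPR."""
    return [ex for ex in examples if ex["answers"]["text"]]

# Toy records in the card's schema (the second has no answer).
examples = [
    {"id": "a", "answers": {"text": ["DISTRIBUTOR AGREEMENT"], "answer_start": [44]}},
    {"id": "b", "answers": {"text": [], "answer_start": []}},
]
kept = drop_unanswerable(examples)
```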
CUAD is curated and maintained by The Atticus Project, Inc. to support NLP research and development in legal contract review. Analysis of CUAD can be found at https://arxiv.org/abs/2103.06268. Code for replicating the results and the trained model can be found at https://github.com/TheAtticusProject/cuad.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"answer_start": [44],
"text": ['DISTRIBUTOR AGREEMENT']
},
"context": 'EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR AGREEMENT (the "Agreement") is made by and between Electric City Corp., a Delaware corporation ("Company") and Electric City of Illinois LLC ("Distributor") this 7th day of September, 1999...',
"id": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT__Document Name_0",
"question": "Highlight the parts (if any) of this contract related to "Document Name" that should be reviewed by a lawyer. Details: The name of the contract",
"title": "LIMEENERGYCO_09_09_1999-EX-10-DISTRIBUTOR AGREEMENT"
}
```
### Data Fields
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
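Since the task is extractive, each `answer_start` offset should point at an exact occurrence of `text` inside `context`. A quick consistency check using the fields above (the example record is a shortened, invented variant of the one shown earlier):

```python
def spans_are_consistent(example):
    """Verify every answer span can be recovered from the context by
    slicing at its answer_start offset."""
    ctx = example["context"]
    ans = example["answers"]
    return all(
        ctx[start:start + len(text)] == text
        for start, text in zip(ans["answer_start"], ans["text"])
    )

example = {
    "context": "EXHIBIT 10.6\n\n DISTRIBUTOR AGREEMENT\n\n THIS DISTRIBUTOR ...",
    "answers": {"answer_start": [15], "text": ["DISTRIBUTOR AGREEMENT"]},
}
ok = spans_are_consistent(example)
```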
### Data Splits
This dataset is split into train/test set. Number of samples in each set is given below:
| | Train | Test |
| ----- | ------ | ---- |
| CUAD | 5442 | 936 |
## Dataset Creation
### Curation Rationale
A highly valuable specialized task without a public large-scale dataset is contract review, which costs humans substantial time, money, and attention. Many law firms spend approximately 50% of their time reviewing contracts (CEB, 2017). Due to the specialized training necessary to understand and interpret contracts, the billing rates for lawyers at large law firms are typically around $500-$900 per hour in the US. As a result, many transactions cost companies hundreds of thousands of dollars just so that lawyers can verify that there are no problematic obligations or requirements included in the contracts. Contract review can be a source of drudgery and, in comparison to other legal tasks, is widely considered to be especially boring.
Contract review costs also affect consumers. Since contract review costs are so prohibitive, contract review is not often performed outside corporate transactions. Small companies and individuals consequently often sign contracts without even reading them, which can result in predatory behavior that harms consumers. Automating contract review by openly releasing high-quality data and fine-tuned models can increase access to legal support for small businesses and individuals, so that legal support is not exclusively available to wealthy companies.
To reduce the disparate societal costs of contract review, and to study how well NLP models generalize to specialized domains, the authors introduced a new large-scale dataset for contract review. As part of The Atticus Project, a non-profit organization of legal experts, CUAD is introduced, the Contract Understanding Atticus Dataset. This dataset was created with a year-long effort pushed forward by dozens of law student annotators, lawyers, and machine learning researchers. The dataset includes more than 500 contracts and more than 13,000 expert annotations that span 41 label categories. For each of 41 different labels, models must learn to highlight the portions of a contract most salient to that label. This makes the task a matter of finding needles in a haystack.
### Source Data
#### Initial Data Collection and Normalization
The CUAD includes commercial contracts selected from 25 different types of contracts based on the contract names as shown below. Within each type, the creators randomly selected contracts based on the names of the filing companies across the alphabet.
| Type of Contract | # of Docs |
| ----- | ----- |
| Affiliate Agreement | 8 |
| Agency Agreement | 8 |
| Collaboration/Cooperation Agreement | 26 |
| Co-Branding Agreement | 6 |
| Consulting Agreement | 11 |
| Development Agreement | 28 |
| Distributor Agreement | 23 |
| Endorsement Agreement | 10 |
| Franchise Agreement | 14 |
| Hosting Agreement | 12 |
| IP Agreement | 16 |
| Joint Venture Agreement | 22 |
| License Agreement | 32 |
| Maintenance Agreement | 24 |
| Manufacturing Agreement | 6 |
| Marketing Agreement | 16 |
| Non-Compete/No-Solicit/Non-Disparagement Agreement | 3 |
| Outsourcing Agreement | 12 |
| Promotion Agreement | 9 |
| Reseller Agreement | 12 |
| Service Agreement | 24 |
| Sponsorship Agreement | 17 |
| Supply Agreement | 13 |
| Strategic Alliance Agreement | 32 |
| Transportation Agreement | 1 |
| **Total** | 385 |
Categories:
- Document Name
- Parties
- Agreement Date
- Effective Date
- Expiration Date
- Renewal Term
- Notice Period To Terminate Renewal
- Governing Law
- Non-Compete
- Exclusivity
- Change Of Control
- Anti-Assignment
#### Who are the source language producers?
The contracts were sourced from EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system used at the U.S. Securities and Exchange Commission (SEC). Publicly traded companies in the United States are required to file certain contracts under the SEC rules. Access to these contracts is available to the public for free at https://www.sec.gov/edgar. Please read the Datasheet at https://www.atticusprojectai.org/ for information on the intended use and limitations of the CUAD.
### Annotations
#### Annotation process
The labeling process included multiple steps to ensure accuracy:
1. Law Student Training: law students attended training sessions on each of the categories that included a summary, video instructions by experienced attorneys, multiple quizzes and workshops. Students were then required to label sample contracts in eBrevia, an online contract review tool. The initial training took approximately 70-100 hours.
2. Law Student Label: law students conducted manual contract review and labeling in eBrevia.
3. Key Word Search: law students conducted keyword search in eBrevia to capture additional categories that have been missed during the “Student Label” step.
4. Category-by-Category Report Review: law students exported the labeled clauses into reports, reviewed each clause category by category, and highlighted clauses that they believed were mislabeled.
5. Attorney Review: experienced attorneys reviewed the category-by-category report with students comments, provided comments and addressed student questions. When applicable, attorneys discussed such results with the students and reached consensus. Students made changes in eBrevia accordingly.
6. eBrevia Extras Review. Attorneys and students used eBrevia to generate a list of “extras”, which are clauses that eBrevia AI tool identified as responsive to a category but not labeled by human annotators. Attorneys and students reviewed all of the “extras” and added the correct ones. The process is repeated until all or substantially all of the “extras” are incorrect labels.
7. Final Report: The final report was exported into a CSV file. Volunteers manually added the “Yes/No” answer column to categories that do not contain an answer.
#### Who are the annotators?
See the annotation process described above.
### Personal and Sensitive Information
Some clauses in the files are redacted because the party submitting these contracts redacted them to protect confidentiality. Such redaction may show up as asterisks (\*\*\*) or underscores (\_\_\_) or blank spaces. The dataset and the answers reflect such redactions. For example, the answer for “January \_\_ 2020” would be “1/[]/2020”).
For any categories that require an answer of “Yes/No”, annotators include full sentences as text context in a contract. To maintain consistency and minimize inter-annotator disagreement, annotators select text for the full sentence, under the instruction of “from period to period”.
For the other categories, annotators selected segments of the text in the contract that are responsive to each such category. One category in a contract may include multiple labels. For example, “Parties” may include 4-10 separate text strings that are not continuous in a contract. The answer is presented in the unified format separated by semicolons of “Party A Inc. (“Party A”); Party B Corp. (“Party B”)”.
Some sentences in the files include confidential legends that are not part of the contracts. An example of such confidential legend is as follows:
THIS EXHIBIT HAS BEEN REDACTED AND IS THE SUBJECT OF A CONFIDENTIAL TREATMENT REQUEST. REDACTED MATERIAL IS MARKED WITH [* * *] AND HAS BEEN FILED SEPARATELY WITH THE SECURITIES AND EXCHANGE COMMISSION.
Some sentences in the files contain irrelevant information such as footers or page numbers. Some sentences may not be relevant to the corresponding category. Some sentences may correspond to a different category. Because many legal clauses are very long and contain various sub-parts, sometimes only a sub-part of a sentence is responsive to a category.
To address the foregoing limitations, annotators manually deleted the portion that is not responsive, replacing it with the symbol "<omitted>" to indicate that the two text segments do not appear immediately next to each other in the contracts. For example, if a “Termination for Convenience” clause starts with “Each Party may terminate this Agreement if” followed by three subparts “(a), (b) and (c)”, but only subpart (c) is responsive to this category, the authors manually deleted subparts (a) and (b) and replaced them with the symbol "<omitted>”. Another example is for “Effective Date”, the contract includes a sentence “This Agreement is effective as of the date written above” that appears after the date “January 1, 2010”. The annotation is as follows: “January 1, 2010 <omitted> This Agreement is effective as of the date written above.”
Because the contracts were converted from PDF into TXT files, the converted TXT files may not stay true to the format of the original PDF files. For example, some contracts contain inconsistent spacing between words, sentences and paragraphs. Table format is not maintained in the TXT files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Attorney Advisors
Wei Chen, John Brockland, Kevin Chen, Jacky Fink, Spencer P. Goodson, Justin Haan, Alex Haskell, Kari Krusmark, Jenny Lin, Jonas Marson, Benjamin Petersen, Alexander Kwonji Rosenberg, William R. Sawyers, Brittany Schmeltz, Max Scott, Zhu Zhu
Law Student Leaders
John Batoha, Daisy Beckner, Lovina Consunji, Gina Diaz, Chris Gronseth, Calvin Hannagan, Joseph Kroon, Sheetal Sharma Saran
Law Student Contributors
Scott Aronin, Bryan Burgoon, Jigar Desai, Imani Haynes, Jeongsoo Kim, Margaret Lynch, Allison Melville, Felix Mendez-Burgos, Nicole Mirkazemi, David Myers, Emily Rissberger, Behrang Seraj, Sarahginy Valcin
Technical Advisors & Contributors
Dan Hendrycks, Collin Burns, Spencer Ball, Anya Chen
### Licensing Information
CUAD is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license and free to the public for commercial and non-commercial use.
The creators make no representations or warranties regarding the license status of the underlying contracts, which are publicly available and downloadable from EDGAR.
Privacy Policy & Disclaimers
The categories and the contracts included in the dataset are not comprehensive or representative. The authors encourage the public to help improve them by sending comments and suggestions to info@atticusprojectai.org. Comments and suggestions will be reviewed by The Atticus Project at its discretion and included in future versions of the Atticus categories once approved.
The use of CUAD is subject to The Atticus Project's privacy policy (https://www.atticusprojectai.org/privacy-policy) and disclaimer (https://www.atticusprojectai.org/disclaimer).
### Citation Information
```
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={arXiv preprint arXiv:2103.06268},
year={2021}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. | 15,737
rungalileo/newsgroups | 2022-10-05T22:49:15.000Z | [
"region:us"
] | rungalileo | null | null | 0 | 6 | 2022-08-04T04:59:02 | Entry not found | 15
rungalileo/sst2 | 2022-10-05T22:48:35.000Z | [
"region:us"
] | rungalileo | null | null | 0 | 6 | 2022-08-04T05:00:27 | Entry not found | 15
RUCAIBox/Summarization | 2022-10-25T06:19:17.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"language:en",
"region:us"
] | RUCAIBox | null | null | 1 | 6 | 2022-08-13T01:53:11 | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- summarization
task_ids: []
---
This is the summarization datasets collected by TextBox, including:
- CNN/Daily Mail (cnndm)
- XSum (xsum)
- SAMSum (samsum)
- WLE (wle)
- Newsroom (nr)
- WikiHow (wikihow)
- MicroSoft News (msn)
- MediaSum (mediasum)
- English Gigaword (eg).
The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset). | 464
RUCAIBox/Story-Generation | 2023-03-03T14:42:27.000Z | [
"task_categories:text-generation",
"multilinguality:monolingual",
"language:en",
"story-generation",
"region:us"
] | RUCAIBox | null | null | 2 | 6 | 2022-08-13T02:09:37 | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-generation
task_ids: []
tags:
- story-generation
---
This is the story generation datasets collected by TextBox, including:
- ROCStories (roc)
- WritingPrompts (wp)
- Hippocorpus (hc)
- WikiPlots (wikip)
- ChangeMyView (cmv).
The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset). | 421
jakartaresearch/id-paraphrase-detection | 2022-08-14T02:10:33.000Z | [
"task_categories:sentence-similarity",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|msrp",
"language:id",
"license:cc-by-4.0",
"msrp",
"id-msrp",
"paraphrase-detection",
"region:us"
] | jakartaresearch | This dataset is built as a playground for sequence to sequence classification | null | 3 | 6 | 2022-08-14T01:46:49 | ---
annotations_creators:
- found
language:
- id
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Indonesian Paraphrase Detection
size_categories:
- 1K<n<10K
source_datasets:
- extended|msrp
tags:
- msrp
- id-msrp
- paraphrase-detection
task_categories:
- sentence-similarity
task_ids: []
---
# Dataset Card for Indonesian Sentence Paraphrase Detection
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset is originally from [Microsoft Research Paraphrase Corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398). We translated the text into Bahasa using google translate.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Indonesian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | 2,994
fourteenBDr/toutiao | 2022-08-21T14:58:22.000Z | [
"license:mit",
"region:us"
] | fourteenBDr | null | null | 1 | 6 | 2022-08-21T14:54:32 | ---
license: mit
---
# Chinese Text Classification Dataset
Data source:
the Toutiao (今日头条) news client
Data format:
```
6552431613437805063_!_102_!_news_entertainment_!_谢娜为李浩菲澄清网络谣言,之后她的两个行为给自己加分_!_佟丽娅,网络谣言,快乐大本营,李浩菲,谢娜,观众们
```
Each line is one record. The five fields are separated by `_!_`; from left to right they are: news ID, category code (see below), category name (see below), news string (title only), and news keywords.
Category codes and names:
```
100 民生 故事 news_story
101 文化 文化 news_culture
102 娱乐 娱乐 news_entertainment
103 体育 体育 news_sports
104 财经 财经 news_finance
106 房产 房产 news_house
107 汽车 汽车 news_car
108 教育 教育 news_edu
109 科技 科技 news_tech
110 军事 军事 news_military
112 旅游 旅游 news_travel
113 国际 国际 news_world
114 证券 股票 stock
115 农业 三农 news_agriculture
116 电竞 游戏 news_game
```
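Given the `_!_`-delimited format above, each line can be parsed into its five fields. A minimal sketch using the card's own example record (the field names in the returned dict are illustrative, not defined by the dataset):

```python
# Example record, copied from the card above.
LINE = (
    "6552431613437805063_!_102_!_news_entertainment_!_"
    "谢娜为李浩菲澄清网络谣言,之后她的两个行为给自己加分_!_"
    "佟丽娅,网络谣言,快乐大本营,李浩菲,谢娜,观众们"
)


def parse_record(line: str) -> dict:
    """Split one record on the `_!_` delimiter into its five fields:
    ID, category code, category name, title, keywords."""
    news_id, code, name, title, keywords = line.rstrip("\n").split("_!_")
    return {
        "id": news_id,
        "category_code": code,            # e.g. "102"
        "category_name": name,            # e.g. "news_entertainment"
        "title": title,
        "keywords": keywords.split(","),  # keywords are comma-separated
    }


record = parse_record(LINE)
# record["category_name"] == "news_entertainment"
```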
Dataset size:
382,688 records in total, distributed across 15 categories.
Collection period:
May 2018
| 646
teven/webnlg_2017_human_eval | 2022-08-24T23:27:45.000Z | [
"region:us"
] | teven | null | null | 0 | 6 | 2022-08-24T23:27:42 | Entry not found | 15