id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ibm/claim_stance | ibm | 2023-11-15T10:01:56Z | 48 | 0 | null | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2023-11-15T10:01:56Z | 2023-11-06T10:29:47.000Z | 2023-11-06T10:29:47 | ---
license: cc-by-3.0
task_categories:
- text-classification
language:
- en
pretty_name: Claim Stance
size_categories:
- 1K<n<10K
configs:
- config_name: claim_stance
data_files:
- split: train
path: "train.csv"
- split: test
path: "test.csv"
- config_name: claim_stance_topic
data_files:
- split: train
path: "train_topic.csv"
- split: validation
path: "dev_topic.csv"
- split: test
path: "test_topic.csv"
---
# Dataset Card for Claim Stance Dataset
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Notes](#notes)
## Dataset Summary
### Claim Stance
This dataset contains 2,394 labeled Wikipedia claims for 55 topics. The dataset includes the stance (Pro/Con) of each claim towards the topic,
as well as fine-grained annotations, based on the semantic model of [Stance Classification of Context-Dependent Claims](https://aclanthology.org/E17-1024/) (topic target,
topic sentiment towards its target, claim target, claim sentiment towards its target, and the relation between the targets).
The dataset is divided into a training set (25 topics, 1,039 claims) and a test set (30 topics, 1,355 claims).
The information in this card refers to this subset of the dataset unless stated otherwise.
### Claim Stance Topic
This subset contains only the claims (column `text`) and their associated topics (column `label`), re-split into train, validation, and test sets.
It can be utilized for topic classification tasks.
## Dataset Structure
* topicId - internal topic ID
* split - train or test
* topicText - the topic text
* topicTarget - sentiment target of topic
* topicSentiment - topic sentiment towards its target (1:positive/-1:negative)
* claims.claimId - claim internal ID
* claims.stance - PRO or CON
* claims.claimCorrectedText - the corrected version of the claim
* claims.claimOriginalText - the original version of the claim
* claims.Compatible - is the claim compatible with the semantic model of [Stance Classification of Context-Dependent Claims](https://aclanthology.org/E17-1024/)? (yes/no)
The following fine-grained annotations are specified only for "compatible" claims:
* claims.claimTarget.text - claim sentiment target text (in the corrected version of the claim)
* claims.claimTarget.span.start - start offset of the claim target within the corrected claim text (e.g., 0)
* claims.claimTarget.span.end - end offset of the claim target within the corrected claim text (e.g., 31)
* claims.claimSentiment - claim's sentiment towards its target (1:positive/-1:negative)
* claims.targetsRelation - relation between claim target and topic target (1:consistent/-1:contrastive)
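For illustration, the span fields can be used to recover the claim target from the corrected claim text. The record below is a minimal sketch following the schema above; the claim text and offsets are hypothetical, and the spans are assumed to be character offsets into `claims.claimCorrectedText`:

```python
# Sketch: recovering the claim target from its span offsets.
# The record is hypothetical; field names follow the schema above, and
# spans are assumed to be character offsets into claimCorrectedText.
record = {
    "claims.claimCorrectedText": "banning violent video games reduces youth aggression",
    "claims.claimTarget.text": "banning violent video games",
    "claims.claimTarget.span.start": 0,
    "claims.claimTarget.span.end": 27,
}

start = record["claims.claimTarget.span.start"]
end = record["claims.claimTarget.span.end"]
target = record["claims.claimCorrectedText"][start:end]
assert target == record["claims.claimTarget.text"]
```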
## Licensing Information
The datasets are released under the following licensing and copyright terms:
* (c) Copyright [Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Copyrights#Reusers.27_rights_and_obligations)
* (c) Copyright IBM 2014. Released under [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
## Citation Information
If you use this dataset, please cite the following paper:
```
@inproceedings{bar-haim-etal-2017-stance,
title = "Stance Classification of Context-Dependent Claims",
author = "Bar-Haim, Roy and
Bhattacharya, Indrajit and
Dinuzzo, Francesco and
Saha, Amrita and
Slonim, Noam",
editor = "Lapata, Mirella and
Blunsom, Phil and
Koller, Alexander",
booktitle = "Proceedings of the 15th Conference of the {E}uropean Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
month = apr,
year = "2017",
address = "Valencia, Spain",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/E17-1024",
pages = "251--261",
abstract = "Recent work has addressed the problem of detecting relevant claims for a given controversial topic. We introduce the complementary task of Claim Stance Classification, along with the first benchmark dataset for this task. We decompose this problem into: (a) open-domain target identification for topic and claim (b) sentiment classification for each target, and (c) open-domain contrast detection between the topic and the claim targets. Manual annotation of the dataset confirms the applicability and validity of our model. We describe an implementation of our model, focusing on a novel algorithm for contrast detection. Our approach achieves promising results, and is shown to outperform several baselines, which represent the common practice of applying a single, monolithic classifier for stance classification.",
}
```
Improved stance classification results on this dataset were published in:
```
@inproceedings{bar-haim-etal-2017-improving,
title = "Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization",
author = "Bar-Haim, Roy and
Edelstein, Lilach and
Jochim, Charles and
Slonim, Noam",
editor = "Habernal, Ivan and
Gurevych, Iryna and
Ashley, Kevin and
Cardie, Claire and
Green, Nancy and
Litman, Diane and
Petasis, Georgios and
Reed, Chris and
Slonim, Noam and
Walker, Vern",
booktitle = "Proceedings of the 4th Workshop on Argument Mining",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-5104",
doi = "10.18653/v1/W17-5104",
pages = "32--38",
abstract = "Stance classification is a core component in on-demand argument construction pipelines. Previous work on claim stance classification relied on background knowledge such as manually-composed sentiment lexicons. We show that both accuracy and coverage can be significantly improved through automatic expansion of the initial lexicon. We also developed a set of contextual features that further improves the state-of-the-art for this task.",
}
```
## Notes
(1) Claim annotations and the experiments reported in [Stance Classification of Context-Dependent Claims](https://aclanthology.org/E17-1024/) and [Improving Claim Stance Classification with Lexical Knowledge Expansion and Context Utilization](https://aclanthology.org/W17-5104/)
are based on the corrected version of the claim. See [A Benchmark Dataset for Automatic Detection of Claims and Evidence in the Context of Controversial Topics](https://aclanthology.org/W14-2109/) for description of generating
corrected version for claims. The original version is the claim as it is found in the clean version of
the article, with no further editing.
(2) The topics and claims partially overlap with the CE-EMNLP-2015 dataset:
Common topics IDs: 1, 21, 61, 81, 101, 121, 181, 221, 323, 381, 441, 442, 443, 481, 482, 483, 601, 602,
621, 641, 642, 644, 645, 648, 662, 663, 665, 681, 683, 701, 721, 742, 743, 744, 761, 801, 803, 841, 861,
881, 923, 926, 941, 942, 944, 946
Only this dataset: 603, 661, 922, 985, 987, 990, 994, 1005, 1065
Only the CE-EMNLP-2015 dataset: 643, 646, 647, 664, 821, 902, 921, 925, 943, 945, 947, 961
| [
-0.6313466429710388,
-0.4990401864051819,
0.1715376079082489,
0.07580456882715225,
-0.1416875272989273,
-0.23386064171791077,
-0.2727767527103424,
-0.3391270935535431,
0.3599865734577179,
0.27151042222976685,
-0.2253146469593048,
-0.7724043726921082,
-0.8029025793075562,
-0.047318965196609... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BEE-spoke-data/code_contests_instruct | BEE-spoke-data | 2023-11-07T22:03:26Z | 48 | 1 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"source_datasets:teven/code_contests",
"source_datasets:deepmind/code_contests",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | 2023-11-07T22:03:26Z | 2023-11-07T20:28:07.000Z | 2023-11-07T20:28:07 | ---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
source_datasets:
- teven/code_contests
- deepmind/code_contests
task_categories:
- text-generation
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
- config_name: hq
data_files:
- split: train
path: hq/train-*
- split: test
path: hq/test-*
- split: valid
path: hq/valid-*
- config_name: hq-deduped
data_files:
- split: train
path: hq-deduped/train-*
- split: validation
path: hq-deduped/validation-*
- split: test
path: hq-deduped/test-*
- config_name: hq-python
data_files:
- split: train
path: hq-python/train-*
- split: test
path: hq-python/test-*
- split: valid
path: hq-python/valid-*
- config_name: hq-python-deduped
data_files:
- split: train
path: hq-python-deduped/train-*
- split: validation
path: hq-python-deduped/validation-*
- split: test
path: hq-python-deduped/test-*
- config_name: min-cols
data_files:
- split: train
path: min-cols/train-*
- split: test
path: min-cols/test-*
- split: valid
path: min-cols/valid-*
dataset_info:
- config_name: default
features:
- name: name
dtype: string
- name: description
dtype: string
- name: source
dtype: int64
- name: difficulty
dtype: int64
- name: solution
dtype: string
- name: language
dtype: string
- name: text
dtype: string
- name: flesch_reading_ease
dtype: float64
splits:
- name: train
num_bytes: 25891168054
num_examples: 4432447
- name: test
num_bytes: 279260221
num_examples: 32181
- name: valid
num_bytes: 252932416
num_examples: 29863
download_size: 5215422847
dataset_size: 26423360691
- config_name: hq
features:
- name: name
dtype: string
- name: source
dtype: int64
- name: difficulty
dtype: int64
- name: language
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5217583126.7379055
num_examples: 1743032
- name: test
num_bytes: 66792901.52201609
num_examples: 15145
- name: valid
num_bytes: 60429767.29487995
num_examples: 14031
download_size: 2680120741
dataset_size: 5344805795.554802
- config_name: hq-deduped
features:
- name: name
dtype: string
- name: source
dtype: int64
- name: difficulty
dtype: int64
- name: language
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2622892441
num_examples: 655870
- name: validation
num_bytes: 36580402
num_examples: 6697
- name: test
num_bytes: 40713434
num_examples: 7535
download_size: 1263763539
dataset_size: 2700186277
- config_name: hq-python
features:
- name: name
dtype: string
- name: source
dtype: int64
- name: difficulty
dtype: int64
- name: language
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1933769036.2943466
num_examples: 646012
- name: test
num_bytes: 16630969.405052671
num_examples: 3771
- name: valid
num_bytes: 17589278.713726014
num_examples: 4084
download_size: 694570534
dataset_size: 1967989284.4131253
- config_name: hq-python-deduped
features:
- name: name
dtype: string
- name: source
dtype: int64
- name: difficulty
dtype: int64
- name: language
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 291003334
num_examples: 103850
- name: validation
num_bytes: 6325352
num_examples: 1377
- name: test
num_bytes: 4835016
num_examples: 1170
download_size: 142884093
dataset_size: 302163702
- config_name: min-cols
features:
- name: language
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13060236837.0
num_examples: 4432447
- name: test
num_bytes: 140470163.0
num_examples: 32181
- name: valid
num_bytes: 127234217.0
num_examples: 29863
download_size: 6417796354
dataset_size: 13327941217.0
tags:
- code
---
# Dataset Card for "code_contests_instruct"
The `deepmind/code_contests` dataset formatted as markdown-instruct for text generation training.
There are several configs; see the YAML metadata above for the full list. Notes:
- `flesch_reading_ease` is computed on the `description` col via [textstat](https://pypi.org/project/textstat/)
- `hq` drops python2 rows (labeled `PYTHON` in the `language` column) and keeps only rows with `flesch_reading_ease` of 75 or greater
- `min-cols` drops all cols except `language` and `text`
- possible values for `language` are `{'CPP', 'JAVA', 'PYTHON', 'PYTHON3'}`
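The `hq` filtering rule can be sketched as a plain Python filter. The rows below are toy examples, not real dataset records; the actual configs were built from the full dataset:

```python
# Sketch of the `hq` rule described above: drop python2 rows
# (language == "PYTHON") and keep rows with flesch_reading_ease >= 75.
# The rows here are toy examples, not real dataset records.
rows = [
    {"language": "PYTHON", "flesch_reading_ease": 90.0, "text": "..."},
    {"language": "PYTHON3", "flesch_reading_ease": 80.2, "text": "..."},
    {"language": "CPP", "flesch_reading_ease": 60.1, "text": "..."},
    {"language": "JAVA", "flesch_reading_ease": 75.0, "text": "..."},
]

hq = [
    r for r in rows
    if r["language"] != "PYTHON" and r["flesch_reading_ease"] >= 75
]
assert [r["language"] for r in hq] == ["PYTHON3", "JAVA"]
```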
### example
An example value in the `text` column:
```
### Prompt
Your challenge is to write a PYTHON3 solution to the following problem:
For the given integer n (n > 2) let's write down all the strings of length n which contain n-2 letters 'a' and two letters 'b' in lexicographical (alphabetical) order.
Recall that the string s of length n is lexicographically less than string t of length n, if there exists such i (1 ≤ i ≤ n), that s_i < t_i, and for any j (1 ≤ j < i) s_j = t_j. The lexicographic comparison of strings is implemented by the operator < in modern programming languages.
For example, if n=5 the strings are (the order does matter):
1. aaabb
2. aabab
3. aabba
4. abaab
5. ababa
6. abbaa
7. baaab
8. baaba
9. babaa
10. bbaaa
It is easy to show that such a list of strings will contain exactly (n ⋅ (n-1))/(2) strings.
You are given n (n > 2) and k (1 ≤ k ≤ (n ⋅ (n-1))/(2)). Print the k-th string from the list.
Input
The input contains one or more test cases.
The first line contains one integer t (1 ≤ t ≤ 10^4) — the number of test cases in the test. Then t test cases follow.
Each test case is written on the the separate line containing two integers n and k (3 ≤ n ≤ 10^5, 1 ≤ k ≤ min(2⋅10^9, (n ⋅ (n-1))/(2)).
The sum of values n over all test cases in the test doesn't exceed 10^5.
Output
For each test case print the k-th string from the list of all described above strings of length n. Strings in the list are sorted lexicographically (alphabetically).
Example
Input
7
5 1
5 2
5 8
5 10
3 1
3 2
20 100
Output
aaabb
aabab
baaba
bbaaa
abb
bab
aaaaabaaaaabaaaaaaaa
### Response
\```python3
t = int(input())
for x in range(t):
n, k = map(int, input().split())
res = ['a'] * n
s = int((n * (n - 1))/2 )
mark = 0
mark1 = 1
for i in range(n - 1, 0, -1):
if s == k:
mark1 = n-mark-1
break
if s < k:
mark1 = k-s
mark -= 1
break
s -= i
mark += 1
# print(mark,mark1)
res[mark] = 'b'
res[n-mark1] ='b'
e = ''.join(map(str,res))
print(e)
\```
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4372079372406006,
-0.6658720374107361,
0.4813104569911957,
0.26745089888572693,
-0.1754908263683319,
-0.04255667328834534,
-0.3666527569293976,
-0.25953376293182373,
0.09781593829393387,
0.25321099162101746,
-0.48303884267807007,
-0.5759906768798828,
-0.574666440486908,
0.57648593187332... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/globaly-segments-es | Globaly | 2023-11-09T21:06:04Z | 48 | 0 | null | [
"region:us"
] | 2023-11-09T21:06:04Z | 2023-11-09T20:58:05.000Z | 2023-11-09T20:58:05 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tomrb/minipileoflaw | tomrb | 2023-11-24T20:51:47Z | 48 | 0 | null | [
"region:us"
] | 2023-11-24T20:51:47Z | 2023-11-10T11:42:07.000Z | 2023-11-10T11:42:07 | ---
configs:
- config_name: acus_reports
data_files:
- split: train
path: "data/minipileoflaw_acus_reports_train.csv"
- split: valid
path: "data/minipileoflaw_acus_reports_valid.csv"
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vinhtran2611/zaloai-2023-nlp-en | vinhtran2611 | 2023-11-11T01:39:52Z | 48 | 0 | null | [
"region:us"
] | 2023-11-11T01:39:52Z | 2023-11-10T14:38:41.000Z | 2023-11-10T14:38:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: choices
sequence: string
- name: explanation
dtype: string
- name: question
dtype: string
- name: id
dtype: string
- name: answer
dtype: string
- name: prompts
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 389140
num_examples: 960
- name: test
num_bytes: 47744
num_examples: 120
- name: dev
num_bytes: 48518
num_examples: 120
download_size: 269982
dataset_size: 485402
---
# Dataset Card for "zaloai-2023-nlp-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5855429172515869,
-0.06222936138510704,
0.06565120071172714,
0.38830819725990295,
-0.22873550653457642,
-0.1744399219751358,
0.055257342755794525,
-0.45493245124816895,
0.938581109046936,
0.4978523850440979,
-0.9687737822532654,
-0.7101540565490723,
-0.5435804724693298,
0.08905942738056... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tungkho178/NLLB_translations_Vietnamese_40_51k76 | tungkho178 | 2023-11-12T17:58:26Z | 48 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-12T17:58:26Z | 2023-11-12T17:57:49.000Z | 2023-11-12T17:57:49 | ---
license: apache-2.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yuufong/PhoNer_Covid19 | yuufong | 2023-11-13T01:18:14Z | 48 | 0 | null | [
"region:us"
] | 2023-11-13T01:18:14Z | 2023-11-13T01:18:11.000Z | 2023-11-13T01:18:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: words
sequence: string
- name: tags
sequence: string
splits:
- name: train
num_bytes: 2638301
num_examples: 5027
- name: validation
num_bytes: 1158651
num_examples: 2000
- name: test
num_bytes: 1158651
num_examples: 2000
download_size: 684199
dataset_size: 4955603
---
# Dataset Card for "PhoNer_Covid19"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.416549414396286,
-0.20826652646064758,
-0.13477207720279694,
0.24004779756069183,
-0.2261485457420349,
-0.012084243819117546,
0.34451159834861755,
-0.1729080229997635,
0.7680853009223938,
0.34294214844703674,
-0.6800985336303711,
-0.6425480842590332,
-0.3734022378921509,
-0.248728066682... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philschmid/dolly-15k-oai-style | philschmid | 2023-11-15T08:14:46Z | 48 | 0 | null | [
"region:us"
] | 2023-11-15T08:14:46Z | 2023-11-15T08:09:58.000Z | 2023-11-15T08:09:58 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 12278400
num_examples: 15011
download_size: 7243728
dataset_size: 12278400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dolly-15k-oai-style"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.47928550839424133,
-0.42261216044425964,
-0.16196243464946747,
0.3205528259277344,
-0.2181040346622467,
-0.1865285187959671,
0.5059013366699219,
-0.2928011417388916,
0.9805879592895508,
0.6603389382362366,
-0.8854005336761475,
-0.6620314717292786,
-0.5002257227897644,
-0.117306970059871... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medmabfc/Arabic_News_Texts_Corpus | medmabfc | 2023-11-23T22:22:35Z | 48 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-23T22:22:35Z | 2023-11-22T20:38:26.000Z | 2023-11-22T20:38:26 | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 655293
num_examples: 154
download_size: 309603
dataset_size: 655293
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xnhyacinth/TQA-Image | Xnhyacinth | 2023-11-25T08:19:52Z | 48 | 0 | null | [
"region:us"
] | 2023-11-25T08:19:52Z | 2023-11-25T07:49:39.000Z | 2023-11-25T07:49:39 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answers
sequence: string
- name: target
dtype: string
- name: ctxs
list:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: compressed_ctxs_1
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: compressed_ctxs_100
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
splits:
- name: train
num_bytes: 5395421949
num_examples: 78785
- name: eval
num_bytes: 605118800
num_examples: 8837
- name: test
num_bytes: 775128252
num_examples: 11313
download_size: 3917964666
dataset_size: 6775669001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
- split: test
path: data/test-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
llangnickel/long-covid-classification-data | llangnickel | 2022-11-24T10:29:58Z | 47 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-24T10:29:58Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'Dataset containing abstracts from PubMed, either related to long COVID
or not. '
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
---
## Data Description
Long-COVID related articles have been manually collected by information specialists.
Please find further information [here](https://doi.org/10.1093/database/baac048).
## Size
| | Training | Development | Test | Total |
|--|--|--|--|--|
| Positive Examples | 215 | 76 | 70 | 345 |
| Negative Examples | 199 | 62 | 68 | 345 |
| Total | 414 | 138 | 138 | 690 |
## Citation
```
@article{10.1093/database/baac048,
    author = {Langnickel, Lisa and Darms, Johannes and Heldt, Katharina and Ducks, Denise and Fluck, Juliane},
    title = "{Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID}",
    journal = {Database},
    volume = {2022},
    year = {2022},
    month = {07},
    issn = {1758-0463},
    doi = {10.1093/database/baac048},
    url = {https://doi.org/10.1093/database/baac048},
    note = {baac048},
    eprint = {https://academic.oup.com/database/article-pdf/doi/10.1093/database/baac048/44371817/baac048.pdf},
}
```
-0.46494051814079285,
-0.7768431901931763,
0.23710298538208008,
0.32966089248657227,
-0.034953389316797256,
-0.10792021453380585,
-0.22430889308452606,
-0.4896784722805023,
0.38900211453437805,
0.048715513199567795,
-0.6414086818695068,
-0.7397294640541077,
-0.31138864159584045,
0.32539060... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mvarma/medwiki | mvarma | 2022-10-25T09:51:06Z | 47 | 4 | null | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"license:cc-by-4.0",
"arxiv:2110.08228",
"region:us"
] | 2022-10-25T09:51:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
YAML tags:
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en-US
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: medwiki
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
---
# Dataset Card for MedWiki
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/HazyResearch/medical-ned-integration)
- **Paper:** [Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text](https://arxiv.org/abs/2110.08228)
- **Point of Contact:** [Maya Varma](mailto:mvarma2@stanford.edu)
### Dataset Summary
MedWiki is a large sentence dataset collected from a medically-relevant subset of Wikipedia and annotated with biomedical entities in the Unified Medical Language System (UMLS) knowledge base. For each entity, we include a rich set of types sourced from both UMLS and WikiData. Consisting of over 13 million sentences and 17 million entity annotations, MedWiki can be utilized as a pretraining resource for language models and can improve performance of medical named entity recognition and disambiguation systems, especially on rare entities.
Here, we include two configurations of MedWiki (further details in [Dataset Creation](#dataset-creation)):
- `MedWiki-Full` is a large sentence dataset with UMLS medical entity annotations generated through the following two steps: (1) a weak labeling procedure to annotate WikiData entities in sentences and (2) a data integration approach that maps WikiData entities to their counterparts in UMLS.
- `MedWiki-HQ` is a subset of MedWiki-Full with higher quality labels designed to limit noise that arises from the annotation procedure listed above.
### Languages
The text in the dataset is in English and was obtained from English Wikipedia.
## Dataset Structure
### Data Instances
A typical data point includes a sentence collected from Wikipedia annotated with UMLS medical entities and associated titles and types.
An example from the MedWiki test set looks as follows:
```
{'sent_idx_unq': 57000409,
'sentence': "The hair , teeth , and skeletal side effects of TDO are lifelong , and treatment is used to manage those effects .",
'mentions': ['tdo'],
'entities': ['C2931236'],
'entity_titles': ['Tricho-dento-osseous syndrome 1'],
'types': [['Disease or Syndrome', 'disease', 'rare disease', 'developmental defect during embryogenesis', 'malformation syndrome with odontal and/or periodontal component', 'primary bone dysplasia with increased bone density', 'syndromic hair shaft abnormality']],
'spans': [[10, 11]]}
```
### Data Fields
- `sent_idx_unq`: a unique integer identifier for the data instance
- `sentence`: a string sentence collected from English Wikipedia. Punctuation is separated from words, and the sentence can be tokenized into words with the `.split()` method.
- `mentions`: list of medical mentions in the sentence.
- `entities`: list of UMLS medical entity identifiers corresponding to mentions. There is exactly one entity for each mention, and the length of the `entities` list is equal to the length of the `mentions` list.
- `entity_titles`: List of English titles collected from UMLS that describe each entity. The length of the `entity_titles` list is equal to the length of the `entities` list.
- `types`: List of category types associated with each entity, including types collected from UMLS and WikiData.
- `spans`: List of integer pairs representing the word span of each mention in the sentence.
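As a quick sanity check, a mention can be recovered from its span. This is a sketch that assumes each span is a `[start, end)` pair of word indices into the whitespace-split sentence, which is consistent with the test-set example shown above:

```python
# Sketch: recovering mention text from the `spans` field, assuming each
# span is a [start, end) pair of word indices into the whitespace-split
# sentence (consistent with the test-set example above).
example = {
    "sentence": "The hair , teeth , and skeletal side effects of TDO are lifelong , and treatment is used to manage those effects .",
    "mentions": ["tdo"],
    "spans": [[10, 11]],
}

tokens = example["sentence"].split()
for mention, (start, end) in zip(example["mentions"], example["spans"]):
    recovered = " ".join(tokens[start:end])
    assert recovered.lower() == mention  # 'TDO' matches the mention 'tdo'
```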
### Data Splits
MedWiki includes two configurations: MedWiki-Full and MedWiki-HQ (described further in [Dataset Creation](#dataset-creation)). For each configuration, data is split into training, development, and test sets. The split sizes are as follows:
| | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| MedWiki-Full Sentences |11,784,235 | 649,132 | 648,608 |
| MedWiki-Full Mentions |15,981,347 | 876,586 | 877,090 |
| MedWiki-Full Unique Entities | 230,871 | 55,002 | 54,772 |
| MedWiki-HQ Sentences | 2,962,089 | 165,941 | 164,193 |
| MedWiki-HQ Mentions | 3,366,108 | 188,957 | 186,622 |
| MedWiki-HQ Unique Entities | 118,572 | 19,725 | 19,437 |
## Dataset Creation
### Curation Rationale
Existing medical text datasets are generally limited in scope, often obtaining low coverage over the entities and structural resources in the UMLS medical knowledge base. When language models are trained across such datasets, the lack of adequate examples may prevent models from learning the complex reasoning patterns that are necessary for performing effective entity linking or disambiguation, especially for rare entities as shown in prior work by [Orr et al.](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf). Wikipedia, which is often utilized as a rich knowledge source in general text settings, contains references to medical terms and can help address this issue. Here, we curate the MedWiki dataset, which is a large-scale, weakly-labeled dataset that consists of sentences from Wikipedia annotated with medical entities in the UMLS knowledge base. MedWiki can serve as a pretraining dataset for language models and holds potential for improving performance on medical named entity recognition tasks, especially on rare entities.
### Source Data
#### Initial Data Collection and Normalization
MedWiki consists of sentences obtained from the November 2019 dump of English Wikipedia. We split pages into an 80/10/10 train/dev/test split and then segment each page at the sentence-level. This ensures that all sentences associated with a single Wikipedia page are placed in the same split.
#### Who are the source language producers?
The source language producers are editors on English Wikipedia.
### Annotations
#### Annotation process
We create two configurations of our dataset: MedWiki-Full and MedWiki-HQ. We label MedWiki-Full by first annotating all English Wikipedia articles with textual mentions and corresponding WikiData entities; we do so by obtaining gold entity labels from internal page links as well as generating weak labels based on pronouns and alternative entity names (see [Orr et al. 2020](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf) for additional information). Then, we use the off-the-shelf entity linker [Bootleg](https://github.com/HazyResearch/bootleg) to map entities in WikiData to their counterparts in the 2017AA release of the Unified Medical Language System (UMLS), a standard knowledge base for biomedical entities (additional implementation details in forthcoming publication). Any sentence containing at least one UMLS entity is included in MedWiki-Full. We also include types associated with each entity, which are collected from both WikiData and UMLS using the generated UMLS-Wikidata mapping. It is important to note that types obtained from WikiData are filtered according to methods described in [Orr et al. 2020](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf).
Since our labeling procedure introduces some noise into annotations, we also release the MedWiki-HQ dataset configuration with higher-quality labels. To generate MedWiki-HQ, we filtered the UMLS-Wikidata mappings to only include pairs of UMLS medical entities and WikiData items that share a high textual overlap between titles. MedWiki-HQ is a subset of MedWiki-Full.
To evaluate the quality of our UMLS-Wikidata mappings, we use the small set of "true" labeled mappings between UMLS entities and WikiData items that WikiData itself includes. (Note that we only include WikiData items associated with linked Wikipedia pages.) This set comprises approximately 9.3k UMLS entities in the original UMLS-Wikidata mapping (used for MedWiki-Full) and 5.6k entities in the filtered UMLS-Wikidata mapping (used for MedWiki-HQ). Using these labeled sets, we find that our mapping accuracy is 80.2% for the original UMLS-Wikidata mapping and 94.5% for the filtered UMLS-Wikidata mapping. We also evaluate integration performance on this set as the proportion of mapped WikiData entities that share a WikiData type with the true entity, suggesting the predicted mapping adds relevant structural resources. Integration performance is 85.4% for the original UMLS-Wikidata mapping and 95.9% for the filtered UMLS-Wikidata mapping. The remainder of items in UMLS have no "true" mappings to WikiData.
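The title-overlap filtering used for MedWiki-HQ can be illustrated with a small sketch; the token-Jaccard metric and the 0.5 threshold below are assumptions for illustration only, not the actual criterion used to build the dataset:

```python
def title_overlap(umls_title: str, wikidata_title: str) -> float:
    """Token-level Jaccard overlap between two entity titles."""
    a = set(umls_title.lower().split())
    b = set(wikidata_title.lower().split())
    return len(a & b) / len(a | b)

def filter_mapping(mapping, threshold=0.5):
    """Keep only UMLS-WikiData pairs whose titles overlap strongly (assumed threshold)."""
    return [(u, w) for (u, w) in mapping if title_overlap(u, w) >= threshold]

# Illustrative pairs, not taken from the actual mapping
pairs = [("myocardial infarction", "Myocardial infarction"),
         ("heart attack", "lung")]
kept = filter_mapping(pairs)  # only the first pair survives
```

A stricter or looser threshold trades mapping coverage (MedWiki-Full) against label quality (MedWiki-HQ).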
#### Who are the annotators?
The dataset was labeled using weak-labeling techniques as described above.
### Personal and Sensitive Information
No personal or sensitive information is included in MedWiki.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to enable the creation of better named entity recognition systems for biomedical text. MedWiki encompasses a large set of entities in the UMLS knowledge base and includes a rich set of types associated with each entity, which can enable the creation of models that achieve high performance on named entity recognition tasks, especially on rare or unpopular entities. Such systems hold potential for improving automated parsing and information retrieval from large quantities of biomedical text.
### Discussion of Biases
The data included in MedWiki comes from English Wikipedia. Generally, Wikipedia articles are neutral in point of view and aim to avoid bias. However, some [prior work](https://www.hbs.edu/ris/Publication%20Files/15-023_e044cf50-f621-4759-a827-e9a3bf8920c0.pdf) has shown that ideological biases may exist within some Wikipedia articles, especially those that are focused on political issues or those that are written by fewer authors. We anticipate that such biases are rare for medical articles, which are typically comprised of scientific facts. However, it is important to note that bias encoded in Wikipedia is likely to be reflected by MedWiki.
### Other Known Limitations
Since MedWiki was annotated using weak labeling techniques, there is likely some noise in entity annotations. (Note that to address this, we include the MedWiki-HQ configuration, which is a subset of MedWiki-Full with higher quality labels. Additional details in [Dataset Creation](#dataset-creation)).
## Additional Information
### Dataset Curators
MedWiki was curated by Maya Varma, Laurel Orr, Sen Wu, Megan Leszczynski, Xiao Ling, and Chris Ré.
### Licensing Information
Dataset licensed under CC BY 4.0.
### Citation Information
```
@inproceedings{varma-etal-2021-cross-domain,
title = "Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text",
author = "Varma, Maya and
Orr, Laurel and
Wu, Sen and
Leszczynski, Megan and
Ling, Xiao and
R{\'e}, Christopher",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.388",
pages = "4566--4575",
}
```
### Contributions
Thanks to [@maya124](https://github.com/maya124) for adding this dataset.
| [
-0.5368154048919678,
-0.7036492228507996,
0.4010153114795685,
-0.12434830516576767,
-0.3757592439651489,
-0.15481990575790405,
-0.3858278691768646,
-0.4926881790161133,
0.4716680943965912,
0.5411183834075928,
-0.5419111847877502,
-0.7525119185447693,
-0.376180499792099,
0.5355654954910278,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
toloka/VoxDIY-RusNews | toloka | 2022-12-06T15:24:30Z | 47 | 2 | null | [
"task_categories:summarization",
"task_categories:automatic-speech-recognition",
"task_categories:text2text-generation",
"annotations_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:ru",
"license:cc-... | 2022-12-06T15:24:30Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- ru
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
- automatic-speech-recognition
- text2text-generation
task_ids: []
pretty_name: VoxDIY RusNews
language_bcp47:
- ru-RU
tags:
- conditional-text-generation
- stuctured-to-text
- speech-recognition
---
# Dataset Card for VoxDIY RusNews
## Dataset Description
- **Repository:** [GitHub](https://github.com/Toloka/CrowdSpeech)
- **Paper:** [Paper](https://openreview.net/forum?id=3_hgF1NAXU7)
- **Point of Contact:** research@toloka.ai
### Dataset Summary
VoxDIY RusNews is the first publicly available large-scale dataset of crowdsourced audio transcriptions in the Russian language.
The dataset was constructed by annotating audio recordings of Russian sentences from news domain on [Toloka crowdsourcing platform](https://toloka.ai).
VoxDIY RusNews consists of 3,091 instances with around 21K annotations obtained from crowd workers.
### Supported Tasks and Leaderboards
Aggregation of crowd transcriptions.
### Languages
Russian
## Dataset Structure
### Data Instances
A data instance contains a URL to the audio recording, a list of transcriptions along with the corresponding performers' identifiers, and the
ground truth. For each data instance, seven crowdsourced transcriptions are provided.
```
{'task': 'https://tlk.s3.yandex.net/annotation_tasks/russian/1003.wav',
'transcriptions': 'в список так же попали мэрлин монро джон ленон и альберт эйнштейн | в список также попали мерлин монро джон ленон и альберт энштейн | в список также попали мерилин монро джон леннон и альберт энтштейн | в список также попали мэрилин монро джон леннон и альберт эпштейн | в список также попали мэрилин монро джон леннон и альберт эйнштейн | в список так же попали мерелин монро джон ленон и альберт нштейн | в список также попали мэрилин монро джон леннон и альберт эйнштейн',
'performers': '1743 | 784 | 1014 | 1572 | 744 | 2187 | 1208',
'gt': 'в список также попали мэрилин монро джон леннон и альберт эйнштейн'}
```
### Data Fields
* task: a string containing a URL of the audio recording
* transcriptions: a list of the crowdsourced transcriptions separated by '|'
* performers: the corresponding performers' identifiers.
* gt: ground truth transcription
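Since the crowd fields are `'|'`-separated strings, a record can be split back into aligned (performer, transcription) pairs; a minimal sketch (the field values below are illustrative, not taken from the dataset):

```python
def parse_record(rec: dict):
    """Split the '|'-separated transcription and performer fields into aligned pairs."""
    transcriptions = [t.strip() for t in rec["transcriptions"].split("|")]
    performers = [int(p) for p in rec["performers"].split("|")]
    assert len(transcriptions) == len(performers), "fields must stay aligned"
    return list(zip(performers, transcriptions))

# Illustrative record following the schema above
record = {
    "task": "https://tlk.s3.yandex.net/annotation_tasks/russian/1003.wav",
    "transcriptions": "first hypothesis | second hypothesis",
    "performers": "1743 | 784",
    "gt": "gold transcription",
}
pairs = parse_record(record)  # [(1743, 'first hypothesis'), (784, 'second hypothesis')]
```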
## Dataset Creation
### Source Data
The audio recordings were obtained using a [speech synthesis tool](https://cloud.yandex.com/en-ru/services/speechkit).
The source sentences come from the Russian test set of the machine translation shared task executed as a part of the
Eighth and Ninth Workshops on Statistical Machine Translation ([WMT 2013](https://www.statmt.org/wmt13/) and [WMT 2014](https://www.statmt.org/wmt14/)).
### Annotations
Annotation was done on the [Toloka crowdsourcing platform](https://toloka.ai) with an overlap of 7 (that is, each task was performed by 7 annotators).
Only annotators who self-reported knowledge of Russian had access to the annotation task.
Additionally, annotators had to pass an *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio
recordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers
who achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).
The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester.
To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets.
See more details in the [paper](https://arxiv.org/pdf/2107.01091.pdf).
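The entrance-exam filter described above can be sketched as follows; this is an illustrative WER computation and worker filter, not the exact implementation used on the platform:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over word sequences via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def passes_exam(pairs, threshold=0.4):
    """Accept a worker whose mean WER over the exam recordings is at most the threshold."""
    mean_wer = sum(wer(ref, hyp) for ref, hyp in pairs) / len(pairs)
    return mean_wer <= threshold
```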
### Citation Information
```
@inproceedings{CrowdSpeech,
author = {Pavlichenko, Nikita and Stelmakh, Ivan and Ustalov, Dmitry},
title = {{CrowdSpeech and Vox~DIY: Benchmark Dataset for Crowdsourced Audio Transcription}},
year = {2021},
booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
eprint = {2107.01091},
eprinttype = {arxiv},
eprintclass = {cs.SD},
url = {https://openreview.net/forum?id=3_hgF1NAXU7},
language = {english},
pubstate = {forthcoming},
}
``` | [
-0.24747100472450256,
-0.3022306561470032,
0.2833152711391449,
0.3646860420703888,
-0.4434202313423157,
0.010856375098228455,
-0.44330355525016785,
-0.30348604917526245,
0.3233181834220886,
0.49359583854675293,
-0.6753214001655579,
-0.8878372311592102,
-0.6819084286689758,
0.28521454334259... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bible-nlp/biblenlp-corpus | bible-nlp | 2023-07-21T11:56:30Z | 47 | 12 | null | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:translation",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aai",
"language:aak",
"language:aau",
"language:aaz",
"lan... | 2023-07-21T11:56:30Z | 2022-04-07T03:04:02.000Z | 2022-04-07T03:04:02 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- aai
- aak
- aau
- aaz
- abt
- abx
- aby
- acf
- acr
- acu
- adz
- aer
- aey
- agd
- agg
- agm
- agn
- agr
- agt
- agu
- aia
- aii
- aka
- ake
- alp
- alq
- als
- aly
- ame
- amf
- amk
- amm
- amn
- amo
- amp
- amr
- amu
- amx
- anh
- anv
- aoi
- aoj
- aom
- aon
- apb
- ape
- apn
- apr
- apu
- apw
- apz
- arb
- are
- arl
- arn
- arp
- asm
- aso
- ata
- atb
- atd
- atg
- att
- auc
- aui
- auy
- avt
- awb
- awk
- awx
- azb
- azg
- azz
- bao
- bba
- bbb
- bbr
- bch
- bco
- bdd
- bea
- bef
- bel
- ben
- beo
- beu
- bgs
- bgt
- bhg
- bhl
- big
- bjk
- bjp
- bjr
- bjv
- bjz
- bkd
- bki
- bkq
- bkx
- bla
- blw
- blz
- bmh
- bmk
- bmr
- bmu
- bnp
- boa
- boj
- bon
- box
- bpr
- bps
- bqc
- bqp
- bre
- bsj
- bsn
- bsp
- bss
- buk
- bus
- bvd
- bvr
- bxh
- byr
- byx
- bzd
- bzh
- bzj
- caa
- cab
- cac
- caf
- cak
- cao
- cap
- car
- cav
- cax
- cbc
- cbi
- cbk
- cbr
- cbs
- cbt
- cbu
- cbv
- cco
- ceb
- cek
- ces
- cgc
- cha
- chd
- chf
- chk
- chq
- chz
- cjo
- cjv
- ckb
- cle
- clu
- cme
- cmn
- cni
- cnl
- cnt
- cof
- con
- cop
- cot
- cpa
- cpb
- cpc
- cpu
- cpy
- crn
- crx
- cso
- csy
- cta
- cth
- ctp
- ctu
- cub
- cuc
- cui
- cuk
- cut
- cux
- cwe
- cya
- daa
- dad
- dah
- dan
- ded
- deu
- dgc
- dgr
- dgz
- dhg
- dif
- dik
- dji
- djk
- djr
- dob
- dop
- dov
- dwr
- dww
- dwy
- ebk
- eko
- emi
- emp
- eng
- enq
- epo
- eri
- ese
- esk
- etr
- ewe
- faa
- fai
- far
- ffm
- for
- fra
- fue
- fuf
- fuh
- gah
- gai
- gam
- gaw
- gdn
- gdr
- geb
- gfk
- ghs
- glk
- gmv
- gng
- gnn
- gnw
- gof
- grc
- gub
- guh
- gui
- guj
- gul
- gum
- gun
- guo
- gup
- gux
- gvc
- gvf
- gvn
- gvs
- gwi
- gym
- gyr
- hat
- hau
- haw
- hbo
- hch
- heb
- heg
- hin
- hix
- hla
- hlt
- hmo
- hns
- hop
- hot
- hrv
- hto
- hub
- hui
- hun
- hus
- huu
- huv
- hvn
- ian
- ign
- ikk
- ikw
- ilo
- imo
- inb
- ind
- ino
- iou
- ipi
- isn
- ita
- iws
- ixl
- jac
- jae
- jao
- jic
- jid
- jiv
- jni
- jpn
- jvn
- kan
- kaq
- kbc
- kbh
- kbm
- kbq
- kdc
- kde
- kdl
- kek
- ken
- kew
- kgf
- kgk
- kgp
- khs
- khz
- kik
- kiw
- kiz
- kje
- kjn
- kjs
- kkc
- kkl
- klt
- klv
- kmg
- kmh
- kmk
- kmo
- kms
- kmu
- kne
- knf
- knj
- knv
- kos
- kpf
- kpg
- kpj
- kpr
- kpw
- kpx
- kqa
- kqc
- kqf
- kql
- kqw
- ksd
- ksj
- ksr
- ktm
- kto
- kud
- kue
- kup
- kvg
- kvn
- kwd
- kwf
- kwi
- kwj
- kyc
- kyf
- kyg
- kyq
- kyz
- kze
- lac
- lat
- lbb
- lbk
- lcm
- leu
- lex
- lgl
- lid
- lif
- lin
- lit
- llg
- lug
- luo
- lww
- maa
- maj
- mal
- mam
- maq
- mar
- mau
- mav
- maz
- mbb
- mbc
- mbh
- mbj
- mbl
- mbs
- mbt
- mca
- mcb
- mcd
- mcf
- mco
- mcp
- mcq
- mcr
- mdy
- med
- mee
- mek
- meq
- met
- meu
- mgc
- mgh
- mgw
- mhl
- mib
- mic
- mie
- mig
- mih
- mil
- mio
- mir
- mit
- miz
- mjc
- mkj
- mkl
- mkn
- mks
- mle
- mlh
- mlp
- mmo
- mmx
- mna
- mop
- mox
- mph
- mpj
- mpm
- mpp
- mps
- mpt
- mpx
- mqb
- mqj
- msb
- msc
- msk
- msm
- msy
- mti
- mto
- mux
- muy
- mva
- mvn
- mwc
- mwe
- mwf
- mwp
- mxb
- mxp
- mxq
- mxt
- mya
- myk
- myu
- myw
- myy
- mzz
- nab
- naf
- nak
- nas
- nay
- nbq
- nca
- nch
- ncj
- ncl
- ncu
- ndg
- ndj
- nfa
- ngp
- ngu
- nhe
- nhg
- nhi
- nho
- nhr
- nhu
- nhw
- nhy
- nif
- nii
- nin
- nko
- nld
- nlg
- nmw
- nna
- nnq
- noa
- nop
- not
- nou
- npi
- npl
- nsn
- nss
- ntj
- ntp
- ntu
- nuy
- nvm
- nwi
- nya
- nys
- nyu
- obo
- okv
- omw
- ong
- ons
- ood
- opm
- ory
- ote
- otm
- otn
- otq
- ots
- pab
- pad
- pah
- pan
- pao
- pes
- pib
- pio
- pir
- piu
- pjt
- pls
- plu
- pma
- poe
- poh
- poi
- pol
- pon
- por
- poy
- ppo
- prf
- pri
- ptp
- ptu
- pwg
- qub
- quc
- quf
- quh
- qul
- qup
- qvc
- qve
- qvh
- qvm
- qvn
- qvs
- qvw
- qvz
- qwh
- qxh
- qxn
- qxo
- rai
- reg
- rgu
- rkb
- rmc
- rmy
- ron
- roo
- rop
- row
- rro
- ruf
- rug
- rus
- rwo
- sab
- san
- sbe
- sbk
- sbs
- seh
- sey
- sgb
- sgz
- shj
- shp
- sim
- sja
- sll
- smk
- snc
- snn
- snp
- snx
- sny
- som
- soq
- soy
- spa
- spl
- spm
- spp
- sps
- spy
- sri
- srm
- srn
- srp
- srq
- ssd
- ssg
- ssx
- stp
- sua
- sue
- sus
- suz
- swe
- swh
- swp
- sxb
- tac
- taj
- tam
- tav
- taw
- tbc
- tbf
- tbg
- tbl
- tbo
- tbz
- tca
- tcs
- tcz
- tdt
- tee
- tel
- ter
- tet
- tew
- tfr
- tgk
- tgl
- tgo
- tgp
- tha
- thd
- tif
- tim
- tiw
- tiy
- tke
- tku
- tlf
- tmd
- tna
- tnc
- tnk
- tnn
- tnp
- toc
- tod
- tof
- toj
- ton
- too
- top
- tos
- tpa
- tpi
- tpt
- tpz
- trc
- tsw
- ttc
- tte
- tuc
- tue
- tuf
- tuo
- tur
- tvk
- twi
- txq
- txu
- tzj
- tzo
- ubr
- ubu
- udu
- uig
- ukr
- uli
- ulk
- upv
- ura
- urb
- urd
- uri
- urt
- urw
- usa
- usp
- uvh
- uvl
- vid
- vie
- viv
- vmy
- waj
- wal
- wap
- wat
- wbi
- wbp
- wed
- wer
- wim
- wiu
- wiv
- wmt
- wmw
- wnc
- wnu
- wol
- wos
- wrk
- wro
- wrs
- wsk
- wuv
- xav
- xbi
- xed
- xla
- xnn
- xon
- xsi
- xtd
- xtm
- yaa
- yad
- yal
- yap
- yaq
- yby
- ycn
- yka
- yle
- yml
- yon
- yor
- yrb
- yre
- yss
- yuj
- yut
- yuw
- yva
- zaa
- zab
- zac
- zad
- zai
- zaj
- zam
- zao
- zap
- zar
- zas
- zat
- zav
- zaw
- zca
- zga
- zia
- ziw
- zlm
- zos
- zpc
- zpl
- zpm
- zpo
- zpq
- zpu
- zpv
- zpz
- zsr
- ztq
- zty
- zyp
- be
- br
- cs
- ch
- zh
- de
- en
- eo
- fr
- ht
- he
- hr
- id
- it
- ja
- la
- nl
- ru
- sa
- so
- es
- sr
- sv
- to
- uk
- vi
license:
- cc-by-4.0
- other
multilinguality:
- translation
- multilingual
pretty_name: biblenlp-corpus
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
# Dataset Card for BibleNLP Corpus
### Dataset Summary
Partial and complete Bible translations in 833 languages, aligned by verse.
### Languages
aai, aak, aau, aaz, abt, abx, aby, acf, acr, acu, adz, aer, aey, agd, agg, agm, agn, agr, agt, agu, aia, aii, aka, ake, alp, alq, als, aly, ame, amf, amk, amm, amn, amo, amp, amr, amu, amx, anh, anv, aoi, aoj, aom, aon, apb, ape, apn, apr, apu, apw, apz, arb, are, arl, arn, arp, asm, aso, ata, atb, atd, atg, att, auc, aui, auy, avt, awb, awk, awx, azb, azg, azz, bao, bba, bbb, bbr, bch, bco, bdd, bea, bef, bel, ben, beo, beu, bgs, bgt, bhg, bhl, big, bjk, bjp, bjr, bjv, bjz, bkd, bki, bkq, bkx, bla, blw, blz, bmh, bmk, bmr, bmu, bnp, boa, boj, bon, box, bpr, bps, bqc, bqp, bre, bsj, bsn, bsp, bss, buk, bus, bvd, bvr, bxh, byr, byx, bzd, bzh, bzj, caa, cab, cac, caf, cak, cao, cap, car, cav, cax, cbc, cbi, cbk, cbr, cbs, cbt, cbu, cbv, cco, ceb, cek, ces, cgc, cha, chd, chf, chk, chq, chz, cjo, cjv, ckb, cle, clu, cme, cmn, cni, cnl, cnt, cof, con, cop, cot, cpa, cpb, cpc, cpu, cpy, crn, crx, cso, csy, cta, cth, ctp, ctu, cub, cuc, cui, cuk, cut, cux, cwe, cya, daa, dad, dah, dan, ded, deu, dgc, dgr, dgz, dhg, dif, dik, dji, djk, djr, dob, dop, dov, dwr, dww, dwy, ebk, eko, emi, emp, eng, enq, epo, eri, ese, esk, etr, ewe, faa, fai, far, ffm, for, fra, fue, fuf, fuh, gah, gai, gam, gaw, gdn, gdr, geb, gfk, ghs, glk, gmv, gng, gnn, gnw, gof, grc, gub, guh, gui, guj, gul, gum, gun, guo, gup, gux, gvc, gvf, gvn, gvs, gwi, gym, gyr, hat, hau, haw, hbo, hch, heb, heg, hin, hix, hla, hlt, hmo, hns, hop, hot, hrv, hto, hub, hui, hun, hus, huu, huv, hvn, ian, ign, ikk, ikw, ilo, imo, inb, ind, ino, iou, ipi, isn, ita, iws, ixl, jac, jae, jao, jic, jid, jiv, jni, jpn, jvn, kan, kaq, kbc, kbh, kbm, kbq, kdc, kde, kdl, kek, ken, kew, kgf, kgk, kgp, khs, khz, kik, kiw, kiz, kje, kjn, kjs, kkc, kkl, klt, klv, kmg, kmh, kmk, kmo, kms, kmu, kne, knf, knj, knv, kos, kpf, kpg, kpj, kpr, kpw, kpx, kqa, kqc, kqf, kql, kqw, ksd, ksj, ksr, ktm, kto, kud, kue, kup, kvg, kvn, kwd, kwf, kwi, kwj, kyc, kyf, kyg, kyq, kyz, kze, lac, lat, lbb, lbk, lcm, leu, lex, lgl, lid, lif, lin, lit, llg, 
lug, luo, lww, maa, maj, mal, mam, maq, mar, mau, mav, maz, mbb, mbc, mbh, mbj, mbl, mbs, mbt, mca, mcb, mcd, mcf, mco, mcp, mcq, mcr, mdy, med, mee, mek, meq, met, meu, mgc, mgh, mgw, mhl, mib, mic, mie, mig, mih, mil, mio, mir, mit, miz, mjc, mkj, mkl, mkn, mks, mle, mlh, mlp, mmo, mmx, mna, mop, mox, mph, mpj, mpm, mpp, mps, mpt, mpx, mqb, mqj, msb, msc, msk, msm, msy, mti, mto, mux, muy, mva, mvn, mwc, mwe, mwf, mwp, mxb, mxp, mxq, mxt, mya, myk, myu, myw, myy, mzz, nab, naf, nak, nas, nay, nbq, nca, nch, ncj, ncl, ncu, ndg, ndj, nfa, ngp, ngu, nhe, nhg, nhi, nho, nhr, nhu, nhw, nhy, nif, nii, nin, nko, nld, nlg, nmw, nna, nnq, noa, nop, not, nou, npi, npl, nsn, nss, ntj, ntp, ntu, nuy, nvm, nwi, nya, nys, nyu, obo, okv, omw, ong, ons, ood, opm, ory, ote, otm, otn, otq, ots, pab, pad, pah, pan, pao, pes, pib, pio, pir, piu, pjt, pls, plu, pma, poe, poh, poi, pol, pon, por, poy, ppo, prf, pri, ptp, ptu, pwg, qub, quc, quf, quh, qul, qup, qvc, qve, qvh, qvm, qvn, qvs, qvw, qvz, qwh, qxh, qxn, qxo, rai, reg, rgu, rkb, rmc, rmy, ron, roo, rop, row, rro, ruf, rug, rus, rwo, sab, san, sbe, sbk, sbs, seh, sey, sgb, sgz, shj, shp, sim, sja, sll, smk, snc, snn, snp, snx, sny, som, soq, soy, spa, spl, spm, spp, sps, spy, sri, srm, srn, srp, srq, ssd, ssg, ssx, stp, sua, sue, sus, suz, swe, swh, swp, sxb, tac, taj, tam, tav, taw, tbc, tbf, tbg, tbl, tbo, tbz, tca, tcs, tcz, tdt, tee, tel, ter, tet, tew, tfr, tgk, tgl, tgo, tgp, tha, thd, tif, tim, tiw, tiy, tke, tku, tlf, tmd, tna, tnc, tnk, tnn, tnp, toc, tod, tof, toj, ton, too, top, tos, tpa, tpi, tpt, tpz, trc, tsw, ttc, tte, tuc, tue, tuf, tuo, tur, tvk, twi, txq, txu, tzj, tzo, ubr, ubu, udu, uig, ukr, uli, ulk, upv, ura, urb, urd, uri, urt, urw, usa, usp, uvh, uvl, vid, vie, viv, vmy, waj, wal, wap, wat, wbi, wbp, wed, wer, wim, wiu, wiv, wmt, wmw, wnc, wnu, wol, wos, wrk, wro, wrs, wsk, wuv, xav, xbi, xed, xla, xnn, xon, xsi, xtd, xtm, yaa, yad, yal, yap, yaq, yby, ycn, yka, yle, yml, yon, yor, yrb, yre, yss, yuj, 
yut, yuw, yva, zaa, zab, zac, zad, zai, zaj, zam, zao, zap, zar, zas, zat, zav, zaw, zca, zga, zia, ziw, zlm, zos, zpc, zpl, zpm, zpo, zpq, zpu, zpv, zpz, zsr, ztq, zty, zyp
## Dataset Structure
### Data Fields
**translation**
- **languages** - an N length list of the languages of the translations, sorted alphabetically
- **translation** - an N length list with the translations each corresponding to the language specified in the above field
**files**
- **lang** - an N length list of the languages of the files, in order of input
- **file** - an N length list of the filenames from the corpus on github, each corresponding with the lang above
**ref** - the verse(s) contained in the record, as a list, each represented as: ``<a three letter book code> <chapter number>:<verse number>``
**licenses** - an N length list of licenses, corresponding to the list of files above
**copyrights** - information on copyright holders, corresponding to the list of files above
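A reference string in this format can be parsed with a small helper; this is an illustrative sketch, not part of the loading script (the pattern also admits digits to cover codes like `1CO`):

```python
import re

REF_RE = re.compile(r"^(?P<book>[A-Z0-9]{3}) (?P<chapter>\d+):(?P<verse>\d+)$")

def parse_ref(ref: str):
    """Parse a verse reference like 'GEN 1:1' into (book, chapter, verse)."""
    m = REF_RE.match(ref)
    if m is None:
        raise ValueError(f"unrecognized reference: {ref!r}")
    return m.group("book"), int(m.group("chapter")), int(m.group("verse"))

refs = ["GEN 1:1", "GEN 1:2"]
parsed = [parse_ref(r) for r in refs]  # [('GEN', 1, 1), ('GEN', 1, 2)]
```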
### Usage
The dataset loading script requires the installation of `tqdm`, `ijson`, and `numpy`.
Specify the languages to be paired as a list of ISO 639-3 language codes, such as ``languages = ['eng', 'fra']``.
By default, the script will return individual verse pairs, as well as verses covering a full range. If only the individual verse pairs are desired, use ``pair='single'``. If only the maximum-range pairing is desired, use ``pair='range'`` (for example, if one text uses the verse range covering GEN 1:1-3, all texts would return only the full-length pairing).
## Sources
https://github.com/BibleNLP/ebible-corpus | [
-0.7216166853904724,
-0.16065393388271332,
0.315409779548645,
0.20331209897994995,
-0.06836684048175812,
0.4185476005077362,
0.1279001086950302,
-0.2518490254878998,
0.40007704496383667,
0.5549840331077576,
-0.6348722577095032,
-0.5086019039154053,
-0.6188847422599792,
0.36221352219581604,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ccdv/WCEP-10 | ccdv | 2022-10-25T10:55:52Z | 47 | 3 | null | [
"task_categories:summarization",
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"conditional-text-generation",
"arxiv:2005.10070",
"arxiv:2110.08499",
"region:us"
] | 2022-10-25T10:55:52Z | 2022-05-09T14:13:26.000Z | 2022-05-09T14:13:26 | ---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- summarization
- text2text-generation
task_ids: []
tags:
- conditional-text-generation
---
# WCEP10 dataset for summarization
Summarization dataset copied from [PRIMERA](https://github.com/allenai/PRIMER)
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/WCEP-10": ("document", "summary")
```
# Configs
4 possible configs:
- `roberta` will concatenate documents with "\</s\>" (default)
- `newline` will concatenate documents with "\n"
- `bert` will concatenate documents with "[SEP]"
- `list` will return the list of documents instead of a string
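The string-based configs can be split back into the document list; a minimal sketch, with the separator strings taken from the config descriptions above:

```python
SEPARATORS = {"roberta": "</s>", "newline": "\n", "bert": "[SEP]"}

def split_documents(document: str, config: str = "roberta"):
    """Recover the list of source documents from a concatenated `document` string."""
    sep = SEPARATORS[config]
    return [d.strip() for d in document.split(sep) if d.strip()]

docs = split_documents("first article</s>second article")  # ['first article', 'second article']
```

With the `list` config this step is unnecessary, since `document` is already a list.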
### Data Fields
- `id`: paper id
- `document`: a string/list containing the body of a set of documents
- `summary`: a string containing the abstract of the set
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_. \
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 8158 |
| Validation | 1020 |
| Test | 1022 |
# Cite original article
```
@article{DBLP:journals/corr/abs-2005-10070,
author = {Demian Gholipour Ghalandari and
Chris Hokamp and
Nghia The Pham and
John Glover and
Georgiana Ifrim},
title = {A Large-Scale Multi-Document Summarization Dataset from the Wikipedia
Current Events Portal},
journal = {CoRR},
volume = {abs/2005.10070},
year = {2020},
url = {https://arxiv.org/abs/2005.10070},
eprinttype = {arXiv},
eprint = {2005.10070},
timestamp = {Fri, 22 May 2020 16:21:28 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-10070.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{DBLP:journals/corr/abs-2110-08499,
author = {Wen Xiao and
Iz Beltagy and
Giuseppe Carenini and
Arman Cohan},
title = {{PRIMER:} Pyramid-based Masked Sentence Pre-training for Multi-document
Summarization},
journal = {CoRR},
volume = {abs/2110.08499},
year = {2021},
url = {https://arxiv.org/abs/2110.08499},
eprinttype = {arXiv},
eprint = {2110.08499},
timestamp = {Fri, 22 Oct 2021 13:33:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2110-08499.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.6282990574836731,
-0.3563585579395294,
0.02676914818584919,
0.33034732937812805,
-0.2820162773132324,
0.10572145879268646,
-0.35942158102989197,
-0.20604558289051056,
0.36471566557884216,
0.3773975372314453,
-0.5004817843437195,
-0.5939722657203674,
-0.6058114171028137,
0.30078777670860... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arize-ai/ecommerce_reviews_with_language_drift | arize-ai | 2022-07-01T17:26:03Z | 47 | 1 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"language:en",
"license:mit",
"region:us"
] | 2022-07-01T17:26:03Z | 2022-05-31T23:24:11.000Z | 2022-05-31T23:24:11 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
source_datasets:
- extended|imdb
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
The text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | [
-0.628180205821991,
-0.4517954885959625,
0.2533239424228668,
0.13039681315422058,
-0.37903034687042236,
0.17068053781986237,
-0.3452197015285492,
-0.19959135353565216,
0.6291214227676392,
0.6300225853919983,
-1.0287784337997437,
-0.9946329593658447,
-0.5445864200592041,
0.03664374351501465... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ITESM/embedded_faqs_medicare | ITESM | 2022-06-14T22:06:28Z | 47 | 0 | null | [
"region:us"
] | 2022-06-14T22:06:28Z | 2022-06-14T22:00:33.000Z | 2022-06-14T22:00:33 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kiddothe2b/contract-nli | kiddothe2b | 2022-07-27T13:07:52Z | 47 | 1 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-07-27T13:07:52Z | 2022-07-27T12:36:23.000Z | 2022-07-27T12:36:23 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
laion/laion2b-multi-vit-h-14-embeddings | laion | 2022-12-23T20:29:43Z | 47 | 1 | null | [
"region:us"
] | 2022-12-23T20:29:43Z | 2022-10-25T22:02:16.000Z | 2022-10-25T22:02:16 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yeeb/C50 | yeeb | 2022-10-26T05:55:06Z | 47 | 0 | null | [
"license:openrail",
"region:us"
] | 2022-10-26T05:55:06Z | 2022-10-26T05:49:50.000Z | 2022-10-26T05:49:50 | ---
license: openrail
---
## Dataset Description
The dataset is a subset of RCV1, a corpus that has already been used in author identification experiments. The top 50 authors (with respect to total size of articles) were selected, considering only texts labeled with at least one subtopic of the class CCAT (corporate/industrial). In that way, the topic factor in distinguishing among the texts is minimized. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes another 2,500 texts (50 per author), non-overlapping with the training texts.
- **Homepage:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50
- **Repository:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** | [
-0.6446667313575745,
-0.4005526006221771,
0.16720952093601227,
0.10493116080760956,
-0.28717562556266785,
0.33072641491889954,
-0.17274363338947296,
-0.2936760187149048,
0.39531898498535156,
0.6286830902099609,
-0.729768693447113,
-0.7545167207717896,
-0.6535186767578125,
0.323470175266265... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2013_cg | bigbio | 2022-12-22T15:43:57Z | 47 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:57Z | 2022-11-13T22:07:03.000Z | 2022-11-13T22:07:03 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2013 CG
homepage: https://github.com/openbiocorpora/bionlp-st-2013-cg
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2013 CG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-cg
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The Cancer Genetics (CG) task is an event extraction task and a main task of the BioNLP Shared Task (ST) 2013.
The CG task is an information extraction task targeting the recognition of events in text,
represented as structured n-ary associations of given physical entities. In addition to
addressing the cancer domain, the CG task is differentiated from previous event extraction
tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple
levels of biological organization, ranging from the molecular through the cellular and organ
levels up to whole organisms. Final test set submissions were accepted from six teams.
## Citation Information
```
@inproceedings{pyysalo-etal-2013-overview,
title = "Overview of the Cancer Genetics ({CG}) task of {B}io{NLP} Shared Task 2013",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Ananiadou, Sophia",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2008",
pages = "58--66",
}
```
| [
-0.0013056760653853416,
-0.5608446598052979,
0.20356912910938263,
0.09789709746837616,
-0.2872485816478729,
-0.08790288120508194,
-0.3317156136035919,
-0.5020945072174072,
0.3225630521774292,
0.19906093180179596,
-0.6955603957176208,
-0.9135928153991699,
-0.7514464259147644,
0.233309999108... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Sociovestix/lenu | Sociovestix | 2023-10-25T15:09:29Z | 47 | 1 | null | [
"region:us"
] | 2023-10-25T15:09:29Z | 2022-11-22T17:24:58.000Z | 2022-11-22T17:24:58 | ---
dataset_info:
- config_name: AT
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': AXSB
'1': EQOV
'2': '8888'
'3': ONF1
'4': DX6Z
'5': JTAV
'6': 5WWO
'7': ECWU
'8': JJYT
'9': E9OX
'10': UI81
'11': GVPD
'12': NIJH
'13': 8XDW
'14': 1NOX
'15': CAQ1
'16': JQOI
'17': O65B
'18': 69H1
'19': G3R6
splits:
- name: train
num_bytes: 1197203
num_examples: 18337
- name: validation
num_bytes: 171674
num_examples: 2620
- name: test
num_bytes: 344598
num_examples: 5240
download_size: 343099313
dataset_size: 1713475
- config_name: AU
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': TXVC
'1': '8888'
'2': ADXG
'3': R4KK
'4': 7TPC
'5': LZFR
'6': BC38
'7': J4JC
'8': Q82Q
'9': 6W6X
'10': XHCV
'11': PQHL
splits:
- name: train
num_bytes: 742544
num_examples: 11150
- name: validation
num_bytes: 105871
num_examples: 1594
- name: test
num_bytes: 212554
num_examples: 3187
download_size: 343099313
dataset_size: 1060969
- config_name: CH
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 3EKS
'1': '8888'
'2': 7MNN
'3': MVII
'4': FJG4
'5': FLNB
'6': 2JZ4
'7': 54WI
'8': XJOT
'9': H781
'10': QSI2
'11': W6A7
'12': L5DU
'13': DP2E
'14': 5BEZ
'15': E0NE
'16': AZA0
'17': 2B81
'18': HX77
'19': CQMY
'20': MRSY
'21': GP8M
'22': FFTN
'23': M848
'24': TL87
'25': 2XJA
'26': BF9N
splits:
- name: train
num_bytes: 613152
num_examples: 9937
- name: validation
num_bytes: 87564
num_examples: 1420
- name: test
num_bytes: 177207
num_examples: 2840
download_size: 343099313
dataset_size: 877923
- config_name: CN
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': ECAK
'1': '8888'
'2': 6NSC
'3': B5UZ
'4': E4FG
'5': 2M6Y
'6': 1IWK
'7': UMCR
'8': I39S
'9': GGZ5
'10': SH05
'11': RV48
'12': OH9O
'13': YXJ5
'14': CYV6
'15': V816
'16': BDTI
'17': OMUD
splits:
- name: train
num_bytes: 2115989
num_examples: 28391
- name: validation
num_bytes: 302006
num_examples: 4057
- name: test
num_bytes: 606021
num_examples: 8112
download_size: 343099313
dataset_size: 3024016
- config_name: CZ
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 9HLU
'1': 6CQN
'2': 9RVC
'3': ZQO8
'4': '8888'
'5': 95G8
'6': 6D9L
'7': 3G3D
'8': SNWJ
'9': J8PB
'10': JCAD
'11': CATU
'12': CIO8
'13': QS6A
'14': CD28
'15': UFDA
'16': QIEL
'17': 7OZQ
'18': 6FAI
'19': NI3I
'20': QQ49
'21': Q25I
'22': 5KU5
'23': BL4B
'24': G2I3
'25': QJ0F
'26': 4UB2
'27': FY1B
'28': VIE3
'29': OVKW
'30': IQ9O
'31': 917C
'32': LJL0
'33': R2XE
'34': MAVU
'35': PFE5
'36': MBUU
'37': NQHQ
'38': D1VK
'39': HQPK
'40': XG70
'41': 74W6
'42': CZUA
'43': NPH3
'44': NJ87
splits:
- name: train
num_bytes: 640736
num_examples: 10885
- name: validation
num_bytes: 91606
num_examples: 1556
- name: test
num_bytes: 183820
num_examples: 3111
download_size: 343099313
dataset_size: 916162
- config_name: DE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 2HBR
'1': 6QQB
'2': XLWA
'3': '8888'
'4': V2YH
'5': QZ3L
'6': 63KS
'7': SQKS
'8': 8Z6G
'9': YJ4C
'10': US8E
'11': T0YJ
'12': SCE1
'13': FR3V
'14': 79H0
'15': AZFE
'16': 40DB
'17': 2YZO
'18': SGST
'19': OL20
'20': 13AV
'21': FEBD
'22': 9JGX
'23': D40E
'24': 8CM0
'25': 7J3S
'26': JNDX
'27': SUA1
'28': JMVF
'29': YA01
'30': AMKW
splits:
- name: train
num_bytes: 6932102
num_examples: 104047
- name: validation
num_bytes: 989330
num_examples: 14865
- name: test
num_bytes: 1981615
num_examples: 29728
download_size: 343099313
dataset_size: 9903047
- config_name: DK
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': H8VP
'1': 599X
'2': ZRPO
'3': D4PU
'4': 40R4
'5': FUKI
'6': NUL8
'7': '8888'
'8': 9KSX
'9': 7WRN
'10': PMJW
'11': PIOI
'12': PZ6Y
'13': GFXN
'14': '9999'
'15': F7JY
'16': 37UT
'17': 1MWR
'18': WU7R
'19': GULL
'20': FW7S
'21': 5QS7
splits:
- name: train
num_bytes: 2384069
num_examples: 41351
- name: validation
num_bytes: 341095
num_examples: 5908
- name: test
num_bytes: 681707
num_examples: 11815
download_size: 343099313
dataset_size: 3406871
- config_name: EE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 9LJA
'1': JC0Y
'2': PRTB
'3': '8888'
'4': LVEQ
'5': 1NKP
'6': VSEV
'7': I1UP
'8': 752Q
'9': J34T
'10': LA47
'11': 8ZQE
'12': 3UPJ
splits:
- name: train
num_bytes: 564808
num_examples: 10933
- name: validation
num_bytes: 80833
num_examples: 1563
- name: test
num_bytes: 161350
num_examples: 3125
download_size: 343099313
dataset_size: 806991
- config_name: ES
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 5RDO
'1': S0Z5
'2': DP3Q
'3': FH4R
'4': '8888'
'5': R6UT
'6': MDOL
'7': UJ35
'8': 1QU8
'9': 8EHB
'10': S6MS
'11': JB2M
'12': CUIH
'13': 1G29
'14': K0RI
'15': GJL1
'16': QMUM
'17': 956I
'18': AXS5
'19': JTV5
'20': 9FPZ
'21': TUHS
'22': A0J6
'23': 4SJR
'24': S6X7
'25': I2WU
'26': A97B
'27': AJ9U
'28': IAS6
'29': SS0L
'30': ARDP
'31': 7U8O
'32': 1SL4
'33': 1ZHJ
'34': B0V5
'35': TDD5
'36': R2L8
'37': 4S57
'38': DDES
'39': IT6N
'40': TLCJ
'41': XYGP
splits:
- name: train
num_bytes: 3960678
num_examples: 63928
- name: validation
num_bytes: 566802
num_examples: 9133
- name: test
num_bytes: 1133425
num_examples: 18266
download_size: 343099313
dataset_size: 5660905
- config_name: FI
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': DKUW
'1': 5WI2
'2': K6VE
'3': '8888'
'4': 8WJ7
'5': 1AFG
'6': HEOB
'7': V0TJ
'8': UXEW
'9': K2G8
'10': NV7C
'11': XJH3
'12': VOTI
'13': '9999'
'14': YK5G
'15': 2RK5
'16': PPMX
'17': BKVI
'18': IYF9
'19': BKQO
'20': EE90
'21': 8HGS
'22': 4H61
'23': DAFV
'24': ZMTL
'25': SJL9
'26': K09E
'27': R39F
'28': SDPE
'29': MRW9
'30': N3LC
'31': 97PB
'32': EDZP
'33': 6PEQ
'34': DMT8
'35': SKGX
'36': KHI5
'37': 37GR
'38': T3K4
'39': HTT9
'40': SQS1
'41': OXLO
'42': R6UB
'43': 9AUC
'44': Z38E
'45': DL9Z
'46': 760X
'47': V42B
'48': UMF0
'49': 1YIR
splits:
- name: train
num_bytes: 1484167
num_examples: 26642
- name: validation
num_bytes: 211423
num_examples: 3807
- name: test
num_bytes: 424302
num_examples: 7613
download_size: 343099313
dataset_size: 2119892
- config_name: GB
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': '8888'
'1': H0PO
'2': B6ES
'3': G12F
'4': Z0EY
'5': STX7
'6': 57V7
'7': XLZV
'8': AVYY
'9': ID30
'10': VV0W
'11': JTCO
'12': 7T8N
'13': Q0M5
'14': 9B78
'15': 17R0
'16': 4GJI
'17': NBTW
'18': E12O
'19': BX6Y
'20': WBQU
'21': IYXU
'22': 60IF
'23': 468Q
'24': '9999'
'25': 8CF0
'26': 4A3J
'27': TT2H
'28': ZQ6S
splits:
- name: train
num_bytes: 3457794
num_examples: 53528
- name: validation
num_bytes: 494508
num_examples: 7648
- name: test
num_bytes: 987987
num_examples: 15294
download_size: 343099313
dataset_size: 4940289
- config_name: HU
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': P9F2
'1': BKUX
'2': 8VH3
'3': S3DA
'4': EO9F
'5': M1DW
'6': 8UEG
'7': BJ8Q
'8': BMYJ
'9': TSVO
'10': 2A44
'11': '8888'
'12': DPY1
'13': DN6F
'14': QYV5
'15': 4C5L
'16': 876R
'17': 4QRE
'18': LNY0
'19': ESTU
'20': BSK1
'21': TQ3O
'22': ZQAQ
'23': 2LB5
'24': OII5
'25': V3LT
'26': 4WV7
'27': J6MO
'28': XW5U
'29': Y64R
'30': 995K
'31': UD8K
'32': '9999'
'33': HTJD
splits:
- name: train
num_bytes: 844059
num_examples: 8665
- name: validation
num_bytes: 120758
num_examples: 1239
- name: test
num_bytes: 242550
num_examples: 2476
download_size: 343099313
dataset_size: 1207367
- config_name: IE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': LGWG
'1': '8888'
'2': MNQ7
'3': VYAX
'4': JXDX
'5': KMFX
'6': 2GV9
'7': C58S
'8': DWS3
'9': HNJK
'10': 5AX8
'11': LZIC
'12': 54SK
'13': URQH
'14': '9999'
'15': 9BPE
'16': FF1D
'17': ZJS8
'18': 363J
splits:
- name: train
num_bytes: 830141
num_examples: 11381
- name: validation
num_bytes: 118467
num_examples: 1627
- name: test
num_bytes: 236640
num_examples: 3252
download_size: 343099313
dataset_size: 1185248
- config_name: JP
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': T417
'1': '8888'
'2': DYQK
'3': 7QQ0
'4': N3JU
'5': R4LR
'6': IUVI
'7': MXMH
'8': 2NRQ
'9': VQLD
'10': 5MVV
splits:
- name: train
num_bytes: 637769
num_examples: 7143
- name: validation
num_bytes: 91619
num_examples: 1021
- name: test
num_bytes: 181834
num_examples: 2041
download_size: 343099313
dataset_size: 911222
- config_name: KY
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': '8888'
'1': OSBR
'2': 6XB7
'3': XAQA
'4': MPUG
'5': MP7S
'6': 4XP8
'7': K575
'8': T5UM
'9': JDX6
'10': 8HR7
'11': SNUK
splits:
- name: train
num_bytes: 1011376
num_examples: 14728
- name: validation
num_bytes: 144456
num_examples: 2105
- name: test
num_bytes: 290789
num_examples: 4209
download_size: 343099313
dataset_size: 1446621
- config_name: LI
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': TV8Y
'1': TMU1
'2': BSZ8
'3': 7RRP
'4': 1DGT
'5': '8888'
'6': 53QF
'7': WAK8
'8': Y8LH
'9': IF49
'10': 32HC
'11': EV7F
'12': ANSR
'13': 1SOY
splits:
- name: train
num_bytes: 372606
num_examples: 6496
- name: validation
num_bytes: 52912
num_examples: 929
- name: test
num_bytes: 106395
num_examples: 1857
download_size: 343099313
dataset_size: 531913
- config_name: LU
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': '9999'
'1': DVXS
'2': '8888'
'3': UDY2
'4': 5GGB
'5': U8KA
'6': 81G5
'7': 63P9
'8': SQ1A
'9': AIR5
'10': WCEP
'11': 2JEI
'12': HHR4
'13': V19Y
'14': BKAB
'15': STBC
'16': V5OS
'17': 2S2U
'18': ZFFA
'19': ATQY
'20': 9C91
'21': EUT4
'22': BEAN
'23': LCR0
'24': 7SIZ
'25': 68J6
'26': 2IGL
splits:
- name: train
num_bytes: 1605714
num_examples: 24718
- name: validation
num_bytes: 229935
num_examples: 3532
- name: test
num_bytes: 458753
num_examples: 7063
download_size: 343099313
dataset_size: 2294402
- config_name: NL
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 54M6
'1': V44D
'2': B5PM
'3': '8888'
'4': EZQW
'5': JHK5
'6': CODH
'7': 62Y3
'8': NFFH
'9': L7HX
'10': A0W7
'11': 4QXM
'12': 8VFX
'13': BBEB
'14': '9999'
'15': 33MN
'16': 9AAK
'17': DEO1
'18': GNXT
'19': M1IZ
'20': UNJ2
splits:
- name: train
num_bytes: 3909154
num_examples: 66504
- name: validation
num_bytes: 560120
num_examples: 9501
- name: test
num_bytes: 1117587
num_examples: 19002
download_size: 343099313
dataset_size: 5586861
- config_name: 'NO'
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': YI42
'1': O0EU
'2': '8888'
'3': EXD7
'4': FSBD
'5': 3C7U
'6': CF5L
'7': LJJW
'8': V06W
'9': IQGE
'10': KX7D
'11': K5P8
'12': 8S9H
'13': 3L58
'14': R71C
'15': 5ZTZ
'16': BJ65
'17': 326Y
'18': ZQ0Q
'19': 4ZRR
'20': PB3V
'21': M9IQ
'22': 9DI1
'23': GYY6
'24': AEV1
'25': 50TD
'26': '9999'
'27': YTMC
'28': Q0Q1
splits:
- name: train
num_bytes: 1272932
num_examples: 24260
- name: validation
num_bytes: 181731
num_examples: 3466
- name: test
num_bytes: 363822
num_examples: 6932
download_size: 343099313
dataset_size: 1818485
- config_name: PL
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': FJ0E
'1': O7XB
'2': RBHP
'3': BSJT
'4': ZVVM
'5': OMX0
'6': 96XK
'7': '8888'
'8': 629I
'9': H7OD
'10': 8TOF
'11': WUJ2
'12': T7PB
'13': B21W
'14': ZZKE
'15': AL9T
'16': 13ZV
'17': KM66
'18': LT9U
'19': SVA3
'20': SP4S
'21': 60BG
'22': J3A3
'23': 3BJG
'24': JCKO
'25': WNX1
'26': QUX1
'27': FQ5Y
'28': 5F76
'29': WOK7
'30': QYL4
'31': GZE5
'32': SMIS
'33': CY1M
'34': YLZL
'35': RUCO
splits:
- name: train
num_bytes: 1186471
num_examples: 14402
- name: validation
num_bytes: 171507
num_examples: 2058
- name: test
num_bytes: 338096
num_examples: 4115
download_size: 343099313
dataset_size: 1696074
- config_name: SE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': XJHM
'1': BEAY
'2': '8888'
'3': CX05
'4': E9BI
'5': '9999'
'6': BYQJ
'7': OJ9I
'8': 1TN0
'9': C61P
'10': AZTO
'11': 2UAX
'12': O1QI
'13': SSOM
'14': 54P7
'15': G04R
'16': M0Y0
'17': UKOL
'18': 381R
'19': 9YIP
'20': PDQ0
'21': WZDB
'22': 44CQ
'23': 27AW
splits:
- name: train
num_bytes: 2342043
num_examples: 40179
- name: validation
num_bytes: 333930
num_examples: 5741
- name: test
num_bytes: 667716
num_examples: 11481
download_size: 343099313
dataset_size: 3343689
- config_name: US-CA
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': '8888'
'1': K7YU
'2': 5HQ4
'3': H1UM
'4': EI4J
'5': 7CDL
'6': PZR6
'7': SQ7B
'8': CVXK
'9': G1P6
'10': KQXA
'11': 5Y1L
'12': N295
'13': BADE
splits:
- name: train
num_bytes: 244757
num_examples: 3821
- name: validation
num_bytes: 34887
num_examples: 547
- name: test
num_bytes: 70390
num_examples: 1093
download_size: 343099313
dataset_size: 350034
- config_name: US-DE
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': HZEH
'1': 4FSX
'2': '8888'
'3': T91T
'4': XTIQ
'5': QF4W
'6': 1HXP
'7': TGMR
'8': JU79
'9': 12N6
'10': 9ASJ
splits:
- name: train
num_bytes: 2184502
num_examples: 34182
- name: validation
num_bytes: 312589
num_examples: 4884
- name: test
num_bytes: 623843
num_examples: 9767
download_size: 343099313
dataset_size: 3120934
- config_name: US-NY
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': '8888'
'1': 51RC
'2': PJ10
'3': SDX0
'4': BO6L
'5': XIZI
'6': M0ER
'7': 4VH5
'8': D6JI
splits:
- name: train
num_bytes: 193565
num_examples: 3085
- name: validation
num_bytes: 27830
num_examples: 441
- name: test
num_bytes: 55783
num_examples: 882
download_size: 343099313
dataset_size: 277178
- config_name: VG
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': 6EH6
'1': '8888'
'2': YOP9
'3': KORB
'4': 1GR6
'5': Q62B
'6': ZHED
'7': N28C
'8': BST2
'9': GLCI
'10': JS65
splits:
- name: train
num_bytes: 642649
num_examples: 10576
- name: validation
num_bytes: 91778
num_examples: 1512
- name: test
num_bytes: 183495
num_examples: 3022
download_size: 343099313
dataset_size: 917922
- config_name: ZA
features:
- name: LEI
dtype: string
- name: Entity.LegalName
dtype: string
- name: Entity.LegalForm.EntityLegalFormCode
dtype:
class_label:
names:
'0': GQVQ
'1': '8888'
'2': XE4Z
'3': 3QSR
'4': 4YUU
'5': R155
'6': MZT6
'7': J7L0
'8': R59V
splits:
- name: train
num_bytes: 56511
num_examples: 855
- name: validation
num_bytes: 7932
num_examples: 123
- name: test
num_bytes: 16111
num_examples: 245
download_size: 343099313
dataset_size: 80554
---
# Dataset Card for "LENU - Legal Entity Name Understanding"
---------------
<h1 align="center">
<a href="https://gleif.org">
<img src="http://sdglabs.ai/wp-content/uploads/2022/07/gleif-logo-new.png" width="220px" style="display: inherit">
</a>
</h1><br>
<h3 align="center">in collaboration with</h3>
<h1 align="center">
<a href="https://sociovestix.com">
<img src="https://sociovestix.com/img/svl_logo_centered.svg" width="700px" style="width: 100%">
</a>
</h1><br>
---------------
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [gleif.org](https://gleif.org)
- **Repository:** [The LENU project](https://github.com/Sociovestix/lenu)
- **Point of Contact:** [aarimond](https://huggingface.co/aarimond)
### Dataset Summary
This dataset contains legal entity names from the Global LEI System in which each entity is
assigned with a unique
[Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei)
(LEI) code (ISO Standard 17442)
along with their corresponding
[Entity Legal Form (ELF) Codes](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list)
(ISO Standard 20275),
which specifies the legal form of each entity.
The dataset has been created as part of a collaboration between the [Global Legal Entity Identifier Foundation](https://gleif.org) (GLEIF) and
[Sociovestix Labs](https://sociovestix.com) with the goal of exploring how Machine Learning can support detecting the ELF Code solely based on an entity's legal name and legal jurisdiction.
See also the open-source Python library [LENU](https://github.com/Sociovestix/lenu), which supports this task.
The data is created from LEI data downloaded from
[GLEIF's public website](https://www.gleif.org/en/lei-data/gleif-golden-copy/download-the-golden-copy/)
(Date: 2022-11-01 00:00), where it is accessible free of charge.
It is divided into subsets for a selection of legal jurisdictions, where each jurisdiction has its own set of ELF Codes.
The ELF Code reference list can be downloaded [here](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
### Languages
The data covers several major jurisdictions (e.g. US-DE (US Delaware), JP (Japan), DE (Germany), and others).
Legal entity names usually follow certain language patterns, depending on which jurisdiction they are located in.
Thus, we apply models that are pre-trained on the corresponding language.
## Dataset Structure
### Data Instances
Each data instance consists of the LEI, the corresponding legal name, and the class-encoded ELF Code.
```
{
'LEI': '254900OMZ079O2SDWA75',
'Entity.LegalName': 'Park Reseda Mortgage LLC',
'Entity.LegalForm.EntityLegalFormCode': 0
}
```
### Data Fields
This is just a subset of available fields in the LEI system. All fields are described in detail in GLEIF's
[LEI Common Data Format (CDF)](https://www.gleif.org/en/about-lei/common-data-file-format/current-versions/level-1-data-lei-cdf-3-1-format).
- `LEI`: The [Legal Entity Identifier](https://www.gleif.org/en/about-lei/introducing-the-legal-entity-identifier-lei) Code. Uniquely identifies a Legal Entity.
- `Entity.LegalName`: The official name of the legal entity as registered in the LEI system.
- `Entity.LegalForm.EntityLegalFormCode`: a class-encoded column containing the [Entity Legal Form Code](https://www.gleif.org/en/about-lei/code-lists/iso-20275-entity-legal-forms-code-list).
### Data Splits
We have divided each jurisdiction's subset into stratified train (70%), validation (10%) and test (20%) splits.
ELF Codes that appear fewer than three times in a jurisdiction have been removed.
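The rare-label filtering rule described above can be sketched in plain Python (a minimal illustration, not the actual preprocessing code; entity names and field layout here are hypothetical):

```python
from collections import Counter

def filter_rare_elf_codes(rows, min_count=3):
    """Drop entities whose ELF code appears fewer than min_count times
    in a jurisdiction, mirroring the rule used to build the splits."""
    counts = Counter(code for _, code in rows)
    return [(name, code) for name, code in rows if counts[code] >= min_count]

rows = [
    ("Acme GmbH", "2HBR"),
    ("Beta GmbH", "2HBR"),
    ("Gamma GmbH", "2HBR"),
    ("Delta e.V.", "8888"),  # "8888" appears only once, so it is dropped
]
print(filter_rare_elf_codes(rows))
```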
## Licensing Information
This dataset, which is based on LEI data, is available under the Creative Commons CC0 license.
See [gleif.org/en/about/open-data](https://gleif.org/en/about/open-data). | [
-0.3527126908302307,
-0.49169766902923584,
0.21481351554393768,
0.01521080732345581,
-0.43354058265686035,
-0.23018978536128998,
-0.14179576933383942,
-0.7583115100860596,
0.31026384234428406,
0.8456475734710693,
-0.4165397584438324,
-1.1024057865142822,
-0.2949301302433014,
0.142054364085... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mjw/stock_market_tweets | mjw | 2022-12-20T19:01:40Z | 47 | 9 | null | [
"license:apache-2.0",
"region:us"
] | 2022-12-20T19:01:40Z | 2022-12-20T18:54:22.000Z | 2022-12-20T18:54:22 |
---
license: apache-2.0
---
# Overview
This dataset contains over 1.7 million public tweets about Apple, Amazon, Google, Microsoft and Tesla stocks, published between 01/01/2015 and 31/12/2019.
| [
-0.3209402561187744,
-0.5929034948348999,
0.3980847895145416,
0.42660221457481384,
0.006022234912961721,
0.339770644903183,
0.06056014820933342,
-0.3780122995376587,
0.6396799683570862,
0.4816085696220398,
-0.5854367017745972,
-0.7673693895339966,
-0.7264504432678223,
-0.17627356946468353,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
keremberke/plane-detection | keremberke | 2023-01-27T13:46:18Z | 47 | 2 | null | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"region:us"
] | 2023-01-27T13:46:18Z | 2023-01-18T09:43:30.000Z | 2023-01-18T09:43:30 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
---
<div align="center">
<img width="640" alt="keremberke/plane-detection" src="https://huggingface.co/datasets/keremberke/plane-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['planes']
```
### Number of Images
```json
{'test': 25, 'valid': 50, 'train': 175}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/plane-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4](https://universe.roboflow.com/skybot-cam/overhead-plane-detector/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ overhead-plane-detector_dataset,
title = { Overhead Plane Detector Dataset },
type = { Open Source Dataset },
author = { SkyBot Cam },
howpublished = { \\url{ https://universe.roboflow.com/skybot-cam/overhead-plane-detector } },
url = { https://universe.roboflow.com/skybot-cam/overhead-plane-detector },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jan },
note = { visited on 2023-01-27 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 30, 2022 at 3:11 PM GMT
It includes 250 images.
Planes are annotated in COCO format.
No pre-processing or image augmentation techniques were applied to the images.
| [
-0.6632840037345886,
-0.18863146007061005,
0.4410490393638611,
0.12115570157766342,
-0.3192833364009857,
-0.1235162541270256,
0.09410973638296127,
-0.30100855231285095,
0.42297422885894775,
0.19161348044872284,
-0.7531997561454773,
-0.5578997731208801,
-0.5066363215446472,
0.02175960317254... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nlphuji/fairface_val_padding_025 | nlphuji | 2023-01-18T22:57:00Z | 47 | 1 | null | [
"region:us"
] | 2023-01-18T22:57:00Z | 2023-01-18T22:46:25.000Z | 2023-01-18T22:46:25 | # FairFace (val set)
Original paper: [Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation](https://openaccess.thecvf.com/content/WACV2021/papers/Karkkainen_FairFace_Face_Attribute_Dataset_for_Balanced_Race_Gender_and_Age_WACV_2021_paper.pdf)
Homepage: https://github.com/joojs/fairface
Bibtex:
```
@inproceedings{karkkainenfairface,
title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age for Bias Measurement and Mitigation},
author={Karkkainen, Kimmo and Joo, Jungseock},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
year={2021},
pages={1548--1558}
}
``` | [
-0.492821604013443,
-0.3653603494167328,
0.2028474658727646,
0.2630457580089569,
0.19694393873214722,
-0.2904549241065979,
0.2472601681947708,
-0.5359671115875244,
-0.057418469339609146,
0.5415096282958984,
-0.7797170877456665,
-0.4346385598182678,
-0.44628751277923584,
-0.2284447699785232... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TREC-AToMiC/AToMiC-Qrels-v0.2 | TREC-AToMiC | 2023-02-14T21:31:18Z | 47 | 1 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-02-14T21:31:18Z | 2023-01-24T13:11:24.000Z | 2023-01-24T13:11:24 | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: Q0
dtype: string
- name: image_id
dtype: string
- name: rel
dtype: int64
splits:
- name: test
num_bytes: 789840
num_examples: 9873
- name: validation
num_bytes: 1424080
num_examples: 17801
- name: train
num_bytes: 352152240
num_examples: 4401903
download_size: 205636566
dataset_size: 354366160
license: cc-by-sa-4.0
---
# Dataset Card for "AToMiC-Qrels-v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4984310269355774,
0.018748611211776733,
0.3300260901451111,
0.014523817226290703,
-0.37210002541542053,
0.07688503712415695,
0.4320448040962219,
-0.19585399329662323,
0.6336991786956787,
0.3423863649368286,
-0.7156326770782471,
-0.659181535243988,
-0.42306822538375854,
-0.22726289927959... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nglaura/scielo-summarization | nglaura | 2023-04-11T10:21:45Z | 47 | 0 | null | [
"task_categories:summarization",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-04-11T10:21:45Z | 2023-01-25T12:02:33.000Z | 2023-01-25T12:02:33 | ---
license: apache-2.0
task_categories:
- summarization
language:
- fr
pretty_name: SciELO
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## SciELO dataset for summarization
SciELO is a dataset for summarization of research papers written in Spanish and Portuguese, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
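As a minimal sketch of the relationship between `article_bboxes` and `norm_article_bboxes` (assuming the common 0–1000 normalization used by layout-aware models; the exact convention should be checked against the data itself):

```python
def normalize_bbox(bbox, page_width, page_height):
    # Scale pixel coordinates (x0, y0, x1, y1) to the 0-1000 range
    # relative to the page dimensions.
    x0, y0, x1, y1 = bbox
    return (
        int(1000 * x0 / page_width),
        int(1000 * y0 / page_height),
        int(1000 * x1 / page_width),
        int(1000 * y1 / page_height),
    )

# Example: a word box on an A4-sized page (595 x 842 points).
print(normalize_bbox((59.5, 84.2, 119.0, 101.0), 595, 842))
```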
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances (ES/PT) |
| ------------- | ----------------------------|
| Train | 20,853 / 19,407 |
| Validation | 1,158 / 1,078 |
| Test | 1,159 / 1,078 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
``` | [
-0.17822113633155823,
-0.45244163274765015,
0.19641262292861938,
0.9324599504470825,
-0.33131712675094604,
-0.04671378806233406,
-0.3274044096469879,
-0.4070315361022949,
0.7301067113876343,
0.48224303126335144,
-0.26455581188201904,
-0.99700528383255,
-0.3952249586582184,
0.40478256344795... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonathan-roberts1/RSSCN7 | jonathan-roberts1 | 2023-03-31T17:20:53Z | 47 | 1 | null | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | 2023-03-31T17:20:53Z | 2023-01-25T16:16:29.000Z | 2023-01-25T16:16:29 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': field
'1': forest
'2': grass
'3': industry
'4': parking
'5': resident
'6': river or lake
splits:
- name: train
num_bytes: 345895442.4
num_examples: 2800
download_size: 367257922
dataset_size: 345895442.4
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "RSSCN7"
## Dataset Description
- **Paper:** [Deep Learning Based Feature Selection for Remote Sensing Scene Classification](https://ieeexplore.ieee.org/iel7/8859/7305891/07272047.pdf)
### Licensing Information
For research and academic purposes.
## Citation Information
[Deep Learning Based Feature Selection for Remote Sensing Scene Classification](https://ieeexplore.ieee.org/iel7/8859/7305891/07272047.pdf)
```
@article{7272047,
title = {Deep Learning Based Feature Selection for Remote Sensing Scene Classification},
author = {Zou, Qin and Ni, Lihao and Zhang, Tong and Wang, Qian},
year = 2015,
journal = {IEEE Geoscience and Remote Sensing Letters},
volume = 12,
number = 11,
pages = {2321--2325},
doi = {10.1109/LGRS.2015.2475299}
}
``` | [
-0.6517180800437927,
0.05890925973653793,
0.18747495114803314,
-0.034068234264850616,
-0.6333792805671692,
-0.0766250342130661,
-0.019011393189430237,
-0.49487388134002686,
-0.2573275864124298,
0.5127658247947693,
-0.5519183278083801,
-0.7436279058456421,
-0.544434130191803,
0.018275270238... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jjmachan/NSFW-questions | jjmachan | 2023-03-04T23:32:09Z | 47 | 6 | null | [
"license:apache-2.0",
"region:us"
] | 2023-03-04T23:32:09Z | 2023-03-03T07:25:45.000Z | 2023-03-03T07:25:45 | ---
license: apache-2.0
dataset_info:
features:
- name: title
dtype: string
- name: subreddit
dtype: string
- name: post_id
dtype: string
- name: score
dtype: int64
- name: link_flair_text
dtype: string
- name: is_self
dtype: bool
- name: over_18
dtype: bool
- name: upvote_ratio
dtype: float64
- name: is_question
dtype: bool
- name: C1
dtype: string
- name: C2
dtype: string
- name: C3
dtype: string
- name: C4
dtype: string
- name: C5
dtype: string
splits:
- name: train
num_bytes: 1541472
num_examples: 1442
download_size: 904939
dataset_size: 1541472
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RiniPL/Dementia_Dataset | RiniPL | 2023-03-15T07:48:14Z | 47 | 3 | null | [
"task_categories:image-classification",
"language:en",
"license:ecl-2.0",
"code",
"region:us"
] | 2023-03-15T07:48:14Z | 2023-03-15T05:57:38.000Z | 2023-03-15T05:57:38 | ---
license: ecl-2.0
task_categories:
- image-classification
language:
- en
tags:
- code
pretty_name: Dementia
--- | [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NbAiLab/norwegian-alpaca | NbAiLab | 2023-07-25T15:05:00Z | 47 | 8 | null | [
"task_categories:text-generation",
"language:no",
"language:nb",
"license:cc-by-4.0",
"instruction-finetuning",
"region:us"
] | 2023-07-25T15:05:00Z | 2023-03-20T13:14:23.000Z | 2023-03-20T13:14:23 | ---
license: cc-by-4.0
language:
- 'no'
- nb
tags:
- instruction-finetuning
pretty_name: NB Alpaca Norwegian Bokmål
task_categories:
- text-generation
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: instruction_en
dtype: string
- name: input_en
dtype: string
- name: output_en
dtype: string
splits:
- name: train
num_bytes: 38067492
num_examples: 51942
download_size: 24204487
dataset_size: 38067492
---
# NB Alpaca Norwegian Bokmål
This dataset is a translation to Norwegian Bokmål of [alpaca_data_cleaned.json](https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca).
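Each record carries `instruction`, `input`, and `output` fields (plus their English originals). A minimal prompt-assembly sketch in the spirit of the Alpaca template — the Norwegian field labels below are illustrative, not the template used by any particular model:

```python
def build_prompt(example):
    """Assemble one training prompt from a record (wording illustrative)."""
    if example.get("input"):
        return (f"Instruksjon: {example['instruction']}\n"
                f"Inndata: {example['input']}\n"
                f"Svar: {example['output']}")
    return f"Instruksjon: {example['instruction']}\nSvar: {example['output']}"

print(build_prompt({"instruction": "Oversett til engelsk.",
                    "input": "Hei", "output": "Hello"}))
```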
An [earlier version](https://huggingface.co/datasets/NbAiLab/norwegian-alpaca/tree/main/nllb) used [Facebook's NLLB 1.3B model](https://huggingface.co/facebook/nllb-200-1.3B), but the current version uses OpenAI's `gpt-3.5-turbo`, hence this dataset cannot be used to create models that compete in any way against OpenAI. | [
-0.4587749242782593,
-0.699059247970581,
0.04266821965575218,
0.4741305708885193,
-0.6350762248039246,
-0.25582727789878845,
0.15433481335639954,
-0.861638605594635,
0.7366222143173218,
0.5881876349449158,
-0.6570468544960022,
-0.534025251865387,
-0.44108015298843384,
0.2040989249944687,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
acheong08/nsfw_reddit | acheong08 | 2023-04-09T13:44:10Z | 47 | 10 | null | [
"license:openrail",
"region:us"
] | 2023-04-09T13:44:10Z | 2023-03-25T08:23:53.000Z | 2023-03-25T08:23:53 | ---
license: openrail
---
| [
-0.1285339742898941,
-0.18616800010204315,
0.6529127359390259,
0.4943626821041107,
-0.1931934952735901,
0.2360742688179016,
0.360720157623291,
0.05056300014257431,
0.5793654322624207,
0.7400140166282654,
-0.6508105993270874,
-0.23783984780311584,
-0.7102248668670654,
-0.047826044261455536,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_wnli_she_inanimate_objects | liuyanchen1015 | 2023-04-03T19:47:26Z | 47 | 0 | null | [
"region:us"
] | 2023-04-03T19:47:26Z | 2023-04-03T19:47:22.000Z | 2023-04-03T19:47:22 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 3305
num_examples: 21
- name: test
num_bytes: 12409
num_examples: 44
- name: train
num_bytes: 29515
num_examples: 166
download_size: 23893
dataset_size: 45229
---
# Dataset Card for "MULTI_VALUE_wnli_she_inanimate_objects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42296743392944336,
-0.27692750096321106,
-0.04978787526488304,
0.03398590907454491,
-0.14144450426101685,
0.0008574865059927106,
0.33896347880363464,
-0.45416852831840515,
0.8024665713310242,
0.26506131887435913,
-0.6819136142730713,
-0.6166194677352905,
-0.5991237759590149,
-0.28766080... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
InstaDeepAI/multi_species_genomes | InstaDeepAI | 2023-11-01T14:07:25Z | 47 | 7 | null | [
"DNA",
"Genomics",
"Nucleotide",
"region:us"
] | 2023-11-01T14:07:25Z | 2023-04-06T19:05:46.000Z | 2023-04-06T19:05:46 | ---
tags:
- DNA
- Genomics
- Nucleotide
pretty_name: Human Reference Genome
---
# Dataset Card for the Multi-species genome
## Dataset Description
- **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer)
- **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1)
### Dataset Summary
The Multi-species dataset was constructed by parsing the genomes available on [NCBI](https://www.ncbi.nlm.nih.gov/), before arbitrarily selecting only one species from each genus. Plant and virus genomes were not taken into account, as their regulatory elements differ from those of interest in the paper's tasks. The resulting collection of genomes was downsampled to a total of 850 species, into which several genomes that are heavily studied in the literature were incorporated. The collection represents 174B nucleotides, resulting in roughly 29B tokens. The distribution of each genomic class in the dataset is displayed below:
| Class                | Number of species | Number of nucleotides (B) |
| -------------------- | ----------------- | ------------------------- |
| Bacteria             | 667               | 17.1                      |
| Fungi                | 46                | 2.3                       |
| Invertebrate         | 39                | 20.8                      |
| Protozoa             | 10                | 0.5                       |
| Mammalian Vertebrate | 31                | 69.8                      |
| Other Vertebrate     | 57                | 63.4                      |
### Supported Tasks and Leaderboards
This dataset has been used as a pre-training corpus for the Nucleotide Transformer models. Depending on the configuration used, each sequence is 6,200 or 12,200 base pairs long. If the dataset is iterated without being shuffled, the first 100 nucleotides of a sequence are the same as the last 100 base pairs of the previous sequence, and the last 100 nucleotides are the same as the first 100 base pairs of the next sequence. During training, this allows a nucleotide to be selected at random among the first 200 nucleotides of a sequence and tokenization to start from that nucleotide. That way, the whole chromosome is covered and the model sees different tokens for a given sequence at each epoch.
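The random-start scheme just described can be sketched in plain Python; the helper names are ours, while the 200-nucleotide offset range and 100-nucleotide boundary overlap follow the card:

```python
import random

def sample_start(sequence, max_offset=200, rng=None):
    """Pick a random tokenization start among the first `max_offset`
    nucleotides of a stored sequence."""
    rng = rng or random.Random()
    offset = rng.randrange(max_offset)
    return offset, sequence[offset:]

def check_overlap(prev_seq, next_seq, overlap=100):
    """Consecutive (unshuffled) records share `overlap` nucleotides at the boundary."""
    return prev_seq[-overlap:] == next_seq[:overlap]

offset, window = sample_start("ACGT" * 2000, rng=random.Random(0))
assert 0 <= offset < 200 and window == ("ACGT" * 2000)[offset:]
```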
### Languages
DNA
## Dataset Structure
[N/A]
### Data Instances
For each instance, there is a string representing the sequence, a string indicating the description of the sequence, two integers representing the indices of the first and last nucleotide respectively, and the URL of the genome's FASTA file. An instance is shown below:
```python
{'sequence': 'AAACTACCACTGGCTAAATTTCGACCATCTGGGCTAATAGCAACTGACCGCACCCAATATTTATGTCCTTTAAGTGTGCGAATTAGCTTTCCTGTGCCTAAATTCCAAACTTTGAGAGTGTTGTCATCGCTACCACTCACCAAAATTTTCCCATTAGGACTAATTGTTAATGCTTGAATGGAGTCAGTATGTCCTGTTAATGTGTAGACTATTTTACCTGTTGCCAAATTCCAGGCTTTAATAGTTTGATCATCACTCCCGCTAACCAAAGTTTTGCCATTGGGACTGATAGCCACAGCATTAACTTTTTGCGAATGTCCACTCAGGGTTAGTATTTCTTTTCCTGTGGTCAGATTCCACATTTTAATTATGCGTTCCCCTTCGCCACTACTAGCAATTGTCTGCCCATCGGGACTAATGGCGACAGAGACAACAGATTTTGCCCCACCTTTGAGGGTGTTAGCTAAGGAAATATTTTTAACTGGAACATTGGGTGACTGACCAAAAACAACTTCACCCTGAGTAGGACTGTAATTTCCTGGCTTTAGTCTCGATAACAAACTGGTTTGAATTTGGTGATATTTTTGATACCAAGTATCACTAAAACCAAATAACAAAATGAAAGCAGCGCCTAAAACTAAACTTTTGACAAAAGCATATTTAAAGGAGAACTTTGCACTCGGTTGAGTTACGGTGAATTTTCCTGATGATTGTCCGGCGGCTGGTAAGGCGCGTGGGAGTGATGGAATCAAATCTTTAATCACTTCATCGGCTGACTGGTAGCGTTGACTTAAGTCTTTTTGCAACAGCTTCGTCATCACCCCTTCCAATTCTGGCGACAAAGGACTACGCAAATATTCCCGCCAACTGTTCGCCCAGCCATAGCCATGTTCCATCCACAATTGAAAAGGGGATGTTCCTGTTAAGAGATGAAAACAGGTAGCCCCCAAACTGAACAAATCACTAGCTGGGTAAGCTTTACCGTCTCTGATTTGTTCCAGTGGAGAATAACCATGCGAACCAATGGATGTACCATTTTTATTCTTGACTTTTTCGGTTAATTGCTTAGAAGAACCAAAATCAATCAAGCTAAGTCGCCCATCATAACGACAGCGAATTAAATTTTCTGGTTTAATGTCTCGGTGAATCACACCGCGATCGTGAATGAATTTGAGTACAGGCAGTAAATCAAGTAAAATTGCTTGAATTTCATTCGCTTTATAGACTTTGCGCTGTTGTAATTCTTTTAACAAGTTCTGCCCATTAATAAACTGTTGTACCAAATAAAGGCAGTTATCTTGTTCAAAGTAAGCAATCAGTGTAGGAATTTGCGGATGTTCGCCGAGTTCTTGCAGTCGCTTGGCTTCTTCTGCAAATAACTCCATTGCTTTTTTCTGCGACCAAGTTCCTTGAAATTTCGGTGCTAATTGCTTAATTACACACAGTTCATTGAGTTTATCGGTATCTTCAGATAAATAAGTTCTGCCAAATCCCCCCTCATCGGAAAGCACCCGAATCACTCGAAAGCGATTTCTTAATAGTGGCACCAAGGGGGTGCTACAAGTTTGGCATGACTGCTTTCCTTTGGGATTTAGGGGATTTGGACAATCGGGATTTAAGCAGCAGATCATTATCTGACAGGCGCAACTGCATAAAAATTTTTACTAAATTAACCCCGATATTTCCCTAGATGATGATTGACTCTCACGTATTGATGGTAGATCCCGCTGGTAGTGGGGAGTGGGGAATCAATTATATAGTCAATTTTGGTAAATGCTCATAAGTTTTCTTCAATGCAGGAAAACTACGAGAGTCATCAGCTGAATTTTATCGATTATAGCAGCAGGCAAAAGTAGCAGACAGGTTAAGAGTGTCATTAGTCAAGACAAATGACTCATGACTAATGACTCATGACTAATAACTAAGGCTTTTGGGTGGCGATCGCTAATTTTGCCCCCTGGACTTGTCTGACTTGATCCATCACTG
CCACTACTTTACCGTGGGTGACTGTTGCATCAGCATTCACAATTACTAATGCTTCTTGGTTATCGCCTACCAAGGTACGCAATTGTCCGGCTAAACCGTCAACAGTGCTTGGTTGACGGTTAACACTTACTATTCCATCTTTATCTACTGTGACGGTAATTTTGGCTGGAACTTGCTGCTGTTTGGCTGTCGCCGCTTTGGGTAAGTTGACGGGTAAACCTTCTGAGCGAGTTAAAAATAACGTTGACATGATAAAAAATGTCAAAATCGCAAATATCACATCAATCATTGGCACGATGTTGATTTGCGGTGGTAAATCTGGCTCATCTTGTAGACGCATAGGTTCTGTCTCCTCGTTCAAAGCGGCGGCGATAGAGCAGTTCTAATTGTCCACCATATTCTTGTATTGCGGCAATCTGTCGTTGATATAACCCTCGAAAGGTATTAGCAAATAAAAGTATAAAAATAGCCACAATTAAACCTGAAGCTGTAGATACCAGCGCTTCACTAATACCTGCGGTAACTCCTGCGGTTTTTGTCCCGCCTACATCACCCAAGTTTAATGATGCAAAAGAAGCAATCAAACCTAATACAGTACCCAGTAGACCTAAAAGTGGTGCAAGACCAATAATTGTGTCAAACATATTTTGAAAACGTTTGAGAACTGGGATTTCGGCTTGCGCTTCACTTTCTAGTGCAAGCCGAAATTCTTCTGGGGTTGGTTCTTCTAATTGCAACGCCGCTAAAAAAATCCGTGTCATGGGCAAATCTGCATTCTTTTGCAATTTATCCAACGCGCCAACAACATTATCAAGGCGGTAAAGATTCAACACTTCTCTGACTATGCGGTTTTGCCGAGTATTGATGCGATACCAAAAGCGGACTCGCTCGATAATTAAAGCAATTCCCACCACACTAAACGCCAGCAGGGGCCACATGACTACGCCACCTGCTACAAACAACTCATACATGGGCAATATCTCTAGGAACTAAATGGACAACGTTACAGTTAGACTAGCAGTTTACGGTACTAAATGATATATCTTATCAATAAGGAGTAGACAAAATAAAAAGCTATGTCAAATTCGGTTGAGTTTTGATGACATAATTATTCATTCTTGTTCAAGGCTTGATTCGCTACAATCCTGATGATGAAAGTATTTGTGTAAGTATACAGTTGATGAAAGCTAACTCAGGAATTTTTTTCTTTATTGCTTGACTTTTGCGAGAGATGGTTTTGAACAGAGTAATTACTAATAAGAACTTGCAATAAATTTAAACAGAACAGTAGTTTGTAGCTTTGCTTGAGAAGCGATCGCCCGACGTTGAGAGTTAAAGTATATTTTGCGTACTAACTTACCCAACGCCCAAAAAATTACATCATTTGAATATCGTCAATTTGTACTCTTAATCATCTATGGCTAAACTATTTGACTCAATCACAGAAGAACTGCAAGAGTTTATTGCAGCCCAAAACCTTTTCTTTGTAGGAACCGCGCCTCTGAGTGCTACAGGTCACGTTAATTTATCTCCCAAAGGTCTCGATTGCTTGCGGATTTTATCACCCCACAAAGTCGCCTATCTCGATCTCACAGGTAGCGGTAACGAAACTTCAGCCCATCTGCAAGAAAATGGTCGCATTACCTTCATGTTTTGCGCCTTCACTGAACCAGCGCGCATCTTGCGACTTTACGGTCAAGGACACGTAATTTTACCTAGCTATCCTGATTGGGATTCTGTATATTCAGTGTTTCCGCCGCTACCAGGAACTCGTCAAATTATCGTAGCTGATATTGAGATTGTGCAAAGTTCCTGTGGTTTCGGCGTTCCTCTTTACGAATACCAAGGTCAACGCCAAACACTAGTAAATTGGGCTGCTAAAAAAGGCGAACAGGGAGTCCGAGAATATCAACAACAAAAAAACAGCATCAGCATTGATGGTTTACCGACACCATTAGGCCAATTATCTGACGGTTAAAGCGGCGTTTCATATATTTTTA
GTTAATCTGAACCAAAAAATCTCAAATTTTTTGTCAATAGTCTCTAGTCCAAAGAAGCTTGATTTTTGACCATAGATTGTAGGCTTTTGACAAAAATAACCTTTATAGAGAAAATTTATCCTTGCTGACACTCTATAACTAAGTTTATAAAACATAGCGTCAAAAATCGATACATATCAGTTCTATTTTCTGCCTCTATTCCTAATTAAATTTGGTGTAAAGGAACTATTATGCGGTTTCCGTGTCTTGACGTAATGATTTGCAACGAATTATGATTCGAGTTTAGTCCGGATCAACCGAGACATCCTCGAAAATTGGTGCAAGTAAATTCAACTTTCGCTCTACATAATCACACGCATGAGATTACGCTTATTTCTGTTTAGCGTTGTCAGTATTGTCCTGCTTTCTTCTCCAGTAAGAGCATCTCGCTTAGAATCTTGGAGCTTTGACACCGCACAAAATCAACTGAATATTACTACTGTATCTGGTGTTAAACCAAGAGCATTTTTAATTCAAAATCCCACGCGGTTAGTTATCGATCTTCCTGGTACACAACTGAACACAAATACAGTTCGGAAAAACTTTGGTTCCACAGTACGTGAAATCCGTGTTGGTAAGGTTGACGATAACACAACAAGATTAGTAGTTGAATTAGCACCTGGATACACTGTAGACCCTAACAAGTTACTGCTGCAAGGTGATTCTTCCACTCATTGGATAGTGAAATTTCCATCGGTAGAACGGGTTCAAAATCCTGTTGATAATAATTTTTCTTTATCTAGTGAAGAGCAAATTCCGGTTTCTGTGAGTGATGTTTCTTTGTTTGCGGGAGTTGTACCGTTAGGTAAGGAAATACCACAATTGCGATCGCAGGTACAAGCCTTAGCTGCTCGTTATCGTTCCCTGGATGCAGGAATGTTCTTTTTAGATTTAGATACTGGTAACTATCTAGATTTAAATGGTGAGAAAGTCTTTCCTGCTGCTAGTACAATAAAGTTTCCCATTTTAGTAGCGTTATTTCAAGAAGTAGATGCAGGTAGAGTCAAACTGAATGAAACCTTAGTTATGCGGCGCGACTTAATAACTGGAGGTTCTGGAGAATTTCAATACAAGCGTGCAGGAAGTCGTTTTAGTCTGATAGAAACCGTGACTAAGATGATTACCATCAGCGACAACACAGCTACCAATATGGTAATTGACCGATTAGGTGGTAAAGCTAAGTTAAATCAGCGTTTTCGTGGTTGGGGTCTGCAAAACACCGTTGTGCGGAATTTACTCGGCGACTTTAAGGGAACGAATACAACTAGCGCCAAAGATTTAGTCAGGCTGTCTGCGTTGGTTGCAAAAAATCAATTATTGACTGATTCCAGCCGTAGCAAAGTTTTGGATATTATGCAGCGTGTTCACAACACCAAGTTATTACCTGCTGGTTTGGGTAAAGGTGCGGTAATTGCTCACAAAACCGGAACTCTAGGCATTGTACTAGGTGATGCCGGGATTATTCAAATGCCATCTGGTAAGCGCTACTTAGCCGGAATTTTTGTCAGAAGACCTTTTAATGATTTAAAAGCGCGAGATTTTATCAATCAAGTTTCTCGAATTGTTTACGGCTATTTAGACCAACCAAGAGTCGCCAGCAAGCCTTAATACTCCTGATGTAAAAAAGAAAAATTTTAATTGACGTAAGCCCCTGATATTCATTAATATCTAGGGGTTTTTGCATATCTATTTATAGCAGTGCTTAACGCACCCTATCTCTCAGTGCGTTACGGCTAATCCTTATTCTCTTAAACTAACAAATTCTTGCATAGCCGTAACACATTCTAATTCATATTGGCTTTGAAGGATATTGACTGTATTCCTGCCAAGTTGGCTACATATACCTAAGCCGCACTGCTAAATTATGAATGGGAAATAACTTGCGGGCTTGATAAACCAACTTTTACTACACTAAACATGCTAAAGCATTAACAACGGACGGATTTAGGTTAGTTGCTTAT
TTTGCTCACTCTTGTGAGAGATTGCTGCTGTTTTTATTGTAGCGATCGACATCAAACTTCTTTATCTCTAAAAGGACAAATATAACAGGAAGTCCTCATTGATTACTCCTATCCTCACCTCGTTCATCGCAAAATGTACGAGGGCTTTTTTTATTTGGCAGAATTTACCCCTATTACGCCAATGATAATTAAAGCTATCGAGAAAAGTTTGGTAAGAGACATTGATTCACGAAACCAAATTACCCCAATAGTAGCGATTACAGTTGTGCCTAAACCTGACCAAACAGCATACGCAATGCTGACTTCAATTTTTTTAAGAGCTAAAGTTAAAAAACTAAAACAAATTCCATAACAGATAAAAATTAAAACCGAGGGAATAGTTCTTGTAAACCCCTCAGACAATTTCATGGAAGTTGTACCAGCGACTTCAAATAAGATTGCTGCAATGAGATAAAGCCAACTATTTACCATGTTTATTGATTGATTATAAGGTGATGATGGGAATATGATTTTTCGACAAGCATAATGAGTCAAAATTCTATATTTAATCTATTAACTAATTCTGCTATTTTGACAACATTTATAGTTAGCTGATGAGATAGGCAAAAATCAAAATATTCATATTTCCGAATTAGTAAAGAAGTTGGTAATCTCTAAAGTTCAGTTTACCACACCAATATTATGGGGGTTTACCGTACTAATACTAAGGTTCGGAAATCATGATGTAATTGGTGATAAAAACCGAATTTACACTGTACTGGATTGTGAATACTATAAAAACAACGCAAATGATTTAAACCTAAATCAACTACACAAAATTAGAAATTAAACGAGGTGGAGACATGACATTAGTGCGTTGGAATCCTTGGCAAGAAATGAACACTCTCCAAAGACAAATCAACAATTTATTTGCAGACGAAATGCTCCCATCTACTTTACTTGAAAGAAGCCTTACAAAAGTTCCGGCGGCTGAATTACACGAATCTGAAGAAGCTATTCATCTCAAGCTAGAATTACCAGGAATTGAAGCCAAAGACCTAGATGTGCAAGTTACAGAAAAAGCTGTGTATATCAGCGGTGAACGGAAATCTGAAACTAAAACAGAAGGGAAAGGTGTAACCAAGAGTGAATTTCATTATGGGAAATTCCAACGTTTGATTCCTTTACCAACTCGCATTCAAAATACCAATGTTACTGCTGATTATAAAGATGGTATTTTGACTCTGACTTTGCCTAAAGCCGAAGAAGAAAAGAAAAAGGTTGTCAAGCTGAATCTTGAATCTATTGGCTAATATCAATTTTGGATTAGCGCTAAAATACCCGACTTCTTTAAGAAGTCGGGTATTTTGTTGTTCACTAATGATTTAAAATTGCTATAAGCTGCGATTTCTGCCTGTTGATTGTTGTCTGTCTACGGGAAAAACGTCAAAATCGAAAGTTGCAATTAGACGCTCATCAACGTATACCTGTATTTTATGCTTACCAGGAGGATCACCTGCGGCGATCGTCCAATAGTTTTCAATTACACCATCATTAGCTATAGTTTTGCGCCTCATTACCGACTCTGTACCGTCAGCGGAGACTGTGAAGTTTTCACCATCATCTGTAGCCCAAGTTTCTGGGGGTTTTGGTAAGCGTAGGACTTCTCGCCATGTAACTTCGCCTTGGTAGTCTTTGAGTTGAATTCGCCACCCATATTTACTACCTTCTTGTAGTGGGACTCTGAATGTGGGGATGAAGTTAACTTTACCTCTAGCATCGACTCTCGCTATGCCAAACTCAGCTTTGTCGATCGCTACCGACTTTTTAGTATTGTTTGCTTGAGAAATTGACCCTGATGATGCTATTTTTTCGTCGGAGATCGCTACTGTAGCATTGATTGGCTGAGACGCTACCAACCCGGAAACTAGCCAAGAAGAAGTTAGTACAACTATTGCAGTCCAAATTCTCATCAGCAAAATTTTTGGTCATTTACTAGTACTTATTCCCGCCTTCCCATT
GGCTTCCGGGTACAGTCCCGATAAATAGCCAAGTTGGCAGAATAAAAGTTGCAGAATTAATAGTCAGTTTATAGTTAAATCGGCAACACCAGATCAAGCCACTCAAACTACTTTACTCTCGGGCCAGTTGCCAGAACTGCGAAAACTATCATCGCAGGTTTTCGGTGTAGGTGCTAAATATGCGTTTATTCTTAACTATTTTGTGTTCAATACGGAATTTTTAATATGTAAGCAATTGCTGACAGTCGGCTATTTGATCAATTGTCATTTCCTAGAGTTTCATCCCCTTGAGGGGAAGGAGTTTGGGAAATGTCAAAAACTGTCAAATGCTTAATGCAAAGATTAACAGTTGTGCCTAAGTGCGATCGCACTTAGGCATGACAAAGCATCAAAAATTAGCATTGGAGAACCGATATTTTCCTATTACCTGACTGCTATATATTGATAGTGAGGCGTTTTTGAGCAGCAAACAGCATGGCAGATATTCCAAATTCCATCGCATCATACCGTGCCTTAGCACTGCAAGTTACCTGTCATGCTGTGAATCAAGCGAGCGATCGCCACGCTGTCCAAGAAATCATTCATCATACTATCAACCGCCTGGCGCAACAAATCGCCGCCAGTATTGCTTTTATTGGTTTTGACTGTCGTTTAATTGTTTTACCAGAATATTTTCTGACAGGTTTCCCGATGGGTGAACCTTTGGCTGTTTGGGGAGAAAAGGCTTGTATAGAAATGCACGGTGCCGAGTATGAAGCCCTCAGTAAAATTGCTCAAAAACATCAGATATTTTTAGCTGGTAACGCCTACGAACTCGACCCCAATTTTCCTGGCTTATACTTTCAAACTTGCTTTGTGATTGACCCGGCTGGTGCTATTGTCTTGCGGTATCGGCGGCTAAATTCGTTATTTGCACCCACACCTCATGATGTTTGGGATAAATATCTTGATTGTTACGGCCTAGAAGGGGTGTTTCCTGTAGCGAAAACTGCAATTGGCAATTTAGCCGCTTTAGCTTCCGAAGAAATTTTGTATCCAGAAGTAGCGCGGTGTTTAGCAATGCGTGGTGCAGAAATTTTTCTGCATTCCACTTCTGAAATTTATAGCAAAAACCTCACACCTAAAGATGCGGCGAAAATTTCTCGCGCTGTGGAAAATATGGCTTACGTTGTGTCTGCGAATACCGCAGGTCTAGCTAATAGTTCTATACCCAGCGCTTCTGTTGATGGTGGCTCAAAAATAGTTGACTATCGCGGTATCGTATTAGCAGAAACAGGTGCAGGCGAAAGTATGGCAGCTTTTGCAGAGATAGATTTAACTGCTTTAAGACGCGATCGCCGTCGTCCAGGGTTAAATAATTTACTGTCTCGCCAGCGATTTGAACTCTACGCCCAAAGCTACAGCCAGTCACAATTTTATCCAGCAAACACTATGCTAAATCAAGAATGCGATCGCCAACACTTCATCCAAACACAGCAACAAACCATAGAACGTCTATCTCAGTTAGGAGTGATTTAAAAGTCTAAAGTCTGAAATTAGATTCTTTTGACCATTGACTATTGACAAATGACAAATGACAAAACCAATCGAAGTCCGTAACCCGCGAACGGGAAAATATGATTATGTAATTATCCCACCGCCGCCGAAACTGCTGGCGCAGCAATGTAACCGAGCGCGAAGGGCGCAAGTGCGTTGGCAAAAACTGGGCGTAGAAGGGAGAGTTGCAGCTTTAAAAGAATGGAAGCAAGCAGTTTTGGCTGGACGCGAAAAGCTCACAGATGCTTTGGTCAATGATACGGGTAGATTATCTATATCAGTGATGGAAATCGACTCATTCCTTTCTAGCATCGATCGCTGGTGTGGATTAGCGCCAGATTTATTACAAGATTCGGCCAAAAATACATCAATTCCGTTCATCGCCTTACAACAAACATCAACGCCTTACCCTGTAGTTGGGGTAATTAGTCCTTGGAATTTCCCTCTGTTGCTGTCTACGATAG
ATACCATTCCCGCACTGTTGGCGGGTTGTGCTGTAGTTGTCAAACCCAGTGAAATTGCACCGCGTTTCATCGCCCCACTGATAGCTGCAATTAATCAAGTACCCGCCTTGCGCGATGTTTTCAGTTTTGTGGAAGGTGCGGGAGAAACTGGCGCGGCTTTGATGGAGAATGTAGATTTAGTTTGTTTTACCGGTAGTGTCGCTACTGGACGCAAAGTTGCAGAAGTCGCCGCACAAAGATTTATCCCCGCTTTTTTGGAATTGGGCGGGAAAGATCCGGCGATCGTGTTGGAATCTGCCGATTTAGAATTAGCCACATCAGCGATTTTATGGGGTTCCGTCGTTAACACCGGACAGTCTTGTTTATCAATTGAGCGTATTTACGTTGCCGAATCTATCTTTGAAAAGTTTTATCATCAGTTAGTAGCCAAAGCACATCGCCTACAACTAGCCCATCCCACCATTGAAAGTGGCGAAATCGGCCCCATTATTGCTGAAAGACAAGCTGGCATAATTAACGAGCATATCTCCGATGCAGTGCAAAAAGGTGCAGTAATTCATTGTGGCGGTAAAGTTGAAGAGTTAGGCGGTGGTTGGTGGTGTCATCCCACAGTGCTGACTCATGTTAACCATACAATGAAAGTCATGACCGAAGAGACTTTTGGCCCGATCATGCCAATCATGCCTTTTGCCACAGTAGAGGAAGCTGTTAACTTAGCCAACGATTCAATTTATGGACTGAGTGCGGCGGTGTTTGCGGAAACCGAAACTGAAGCGTTAACAGTTGCCCAGCAAATAGATGCAGGTGCTATCAGTATTAATGATGCCGCCCTCACCGCCATTATGCACGAAGGTGAAAAAAACGCTTTCAAATTATCCGGTTTAGGCGGTTCACGTATGGGTGCAGCCGCCATCAAACGATTTTTGCGGAAAAAAGCGTTTTTGATTAAAACCAACTCAAATCAAGACCCTTGGTGGTTTGAGCCTAAAGTGTAGTGCAATCTTCTCTCAGCGACCTCTGCGTCTCTGTAGTTCGTTAAAAACCGTATTAGATTCTGTTTGTTGGGTTTCGCTGTCGCTTCACCCAACCTACTTTCCTTAAACCCCTACTACAGATTCATTCACAGTTTCACTAGCCGCAACACCATTAGTCAAAATCGCTTGCCGAGTTTTCAGGTTAAATTTATAACCATGTGGCAAAATATGCAGCTTCGCACCACAAATTGCCAAAGGTTCATCCCGGAGAATTGTATCTGCGTTGTTATATGTAGATTCAGACTCATCCACAATGGTGACTGAACCTTCACCAATAATTTCGATTTGGTCATCAGTCACGGCGATCGCTGTATTCTCATCAATCCCAAATCCTAACACCGCAGGTTCATGAATTAAAGCTGTAATTAAACGCCCTAAGCGTCCCCGTTGTAAGAAATGTTGGTCAATCACCACCCCTGGGAGAAAACCCATACCAGGCCCCATTTCCACAATTTCCATCCGTGGTGTACTTTGAGAATCACCCTCAACAATCATTTTATCGGGCATCACAGCCGCACCCGCACTAGTACCTGCAATTACTGCACCTTCAGCATAGCGTTGGTGAATAGCCGCATCGATTTCGGTATCCTTGAGGATACTAGTAATTCGCGCTTGGTCTCCTCCAGTAAAAAATATCCCAGTCGCCTTAGCAATAGCTTCTAAAGCCGTAGAAGACCTAGCATCTTCACGAGTTTCTGTATCAATAATGCGAACGTGTTCTGCACCTAGCCGTTCAAAAACTCTAATATAATTTTCCCCCACTTCTCTAGGCAGTTCTGTGGCGGCCGTCATAATTACAATATTGGCTTTTGTACCCCCAGCCCGACGGACAAATTCTCGCAGAATCACACAATCTCCTTCTTTATCTTCTGCGCCACCAATAATTACCAACTGGCGTTTATGTGCAGTTTCTGTCATAATGCCCCCCGGATAACCGGATTAGAATTTAATTTAGATTAATTTCAA
TAAAACATGACAATTATCACAATCAAATCATCCATTTGATAGATTAATTTTTAATGGCAAAAGTTAAATTATATATAACTTTATGTATATATAAACTCTTGCCAAATTTAGCATTTTTAATAATTGGTAATTCATTTAGCAGAATTACCAATTACTTATACAGTAATAATTTATGTATAACTCTTCTCAAGTAATAGCACTAAAATCTCATAGT',
'description': 'NZ_AP018174.1 Anabaenopsis circularis NIES-21 DNA, nearly complete genome',
'start_pos': 1824000,
'end_pos': 1836200,
'fasta_url': 'https://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/Anabaenopsis_circularis/latest_assembly_versions/GCF_002367975.1_ASM236797v1/GCF_002367975.1_ASM236797v1_genomic.fna.gz'}
```
### Data Fields
- `sequence`: a string containing a DNA sequence from one of the genomes in the collection
- `description`: a string indicating the species of the sequence as well as the NCBI id.
- `start_pos`: an integer indicating the index of the sequence's first nucleotide
- `end_pos`: an integer indicating the index of the sequence's last nucleotide
- `fasta_url`: a string indicating the URL used to download the FASTA file from which the sequence was taken.
### Data Splits
The Multi-species dataset has 3 splits: train, validation, and test.
## Dataset Creation
[N/A]
### Curation Rationale
[N/A]
### Source Data
#### Initial Data Collection and Normalization
The data consists of sequences cut from the whole-genome sequences of the 850 sampled species, which can be found in the `urls.csv` file of this dataset's repository.
#### Who are the source language producers?
[N/A]
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
[N/A]
### Citation Information
```bibtex
@article{dalla2023nucleotide,
title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics},
author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others},
journal={bioRxiv},
pages={2023--01},
year={2023},
publisher={Cold Spring Harbor Laboratory}
}
``` | [
-0.7558447122573853,
-0.30931931734085083,
0.08322685211896896,
-0.04371576011180878,
-0.32662561535835266,
0.14853344857692719,
-0.15486422181129456,
0.019384143874049187,
0.4498351812362671,
0.31373703479766846,
-0.5190637111663818,
-0.594306468963623,
-0.6534448862075806,
0.662719786167... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
distil-whisper/tedlium | distil-whisper | 2023-09-25T10:30:14Z | 47 | 0 | null | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-nc-nd-3.0",
"region:us"
] | 2023-09-25T10:30:14Z | 2023-04-10T07:32:45.000Z | 2023-04-10T07:32:45 | ---
license: cc-by-nc-nd-3.0
task_categories:
- automatic-speech-recognition
language:
- en
pretty_name: TEDLIUM
---
# Distil Whisper: TEDLIUM
This is a variant of the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/LIUM/tedlium).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
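One common use of paired reference and pseudo-labelled transcriptions (e.g. in the Distil-Whisper recipe) is to discard samples where the two disagree too much. A minimal word-error-rate check in plain Python — the column names in the commented usage are assumptions, not confirmed by this card:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    prev_row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur_row = [i]
        for j, h in enumerate(hyp, 1):
            cur_row.append(min(prev_row[j] + 1,             # deletion
                               cur_row[j - 1] + 1,          # insertion
                               prev_row[j - 1] + (r != h))) # substitution
        prev_row = cur_row
    return prev_row[-1] / max(len(ref), 1)

# e.g. keep a sample only if the pseudo-label roughly agrees with the reference
# (field names are assumptions):
# keep = wer(sample["text"], sample["whisper_transcript"]) < 0.1
```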
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under CC BY-NC-ND 3.0.
| [
-0.041806723922491074,
-0.6224288940429688,
0.25887492299079895,
0.36294281482696533,
-0.17023062705993652,
0.06993899494409561,
-0.2571859061717987,
-0.1522679179906845,
0.4115492105484009,
0.42335495352745056,
-0.8737609386444092,
-0.5662652254104614,
-0.5406071543693542,
0.1383695006370... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlh/uci-adult-income | jlh | 2023-04-25T23:19:35Z | 47 | 0 | null | [
"region:us"
] | 2023-04-25T23:19:35Z | 2023-04-25T21:40:16.000Z | 2023-04-25T21:40:16 | ---
dataset_info:
features:
- name: age
dtype: int64
- name: workclass
dtype: string
- name: fnlwgt
dtype: int64
- name: education
dtype: string
- name: education-num
dtype: int64
- name: marital-status
dtype: string
- name: occupation
dtype: string
- name: relationship
dtype: string
- name: race
dtype: string
- name: sex
dtype: string
- name: capital-gain
dtype: int64
- name: capital-loss
dtype: int64
- name: hours-per-week
dtype: int64
- name: native-country
dtype: string
- name: income
dtype:
class_label:
names:
'0': ' <=50K'
'1': ' >50K'
splits:
- name: train
num_bytes: 5552570
num_examples: 32561
download_size: 586658
dataset_size: 5552570
---
# Dataset Card for "uci-adult-income"
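Note that the class-label names declared in the schema above carry a leading space (`' <=50K'` / `' >50K'`), as is common for the raw UCI CSV. A minimal sketch of mapping a raw income string onto the label id used here, tolerating that whitespace (the helper name is ours):

```python
LABELS = [" <=50K", " >50K"]  # exactly as declared in the schema above

def income_label_id(raw):
    """Map a raw income string to its class id, tolerating stray whitespace."""
    stripped = raw.strip()
    for idx, name in enumerate(LABELS):
        if name.strip() == stripped:
            return idx
    raise ValueError(f"unknown income label: {raw!r}")

print(income_label_id(" >50K"))  # 1
```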
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5297618508338928,
-0.20479081571102142,
-0.023407327011227608,
0.3438165783882141,
0.04993942007422447,
0.1910981684923172,
0.2184348702430725,
-0.27417081594467163,
0.77547687292099,
0.597679615020752,
-0.8266263604164124,
-0.7737374305725098,
-0.39604485034942627,
-0.2402777522802353,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
doushabao4766/ontonotes_zh_ner_knowledge_V3_wc | doushabao4766 | 2023-05-27T01:56:57Z | 47 | 0 | null | [
"region:us"
] | 2023-05-27T01:56:57Z | 2023-05-27T01:56:42.000Z | 2023-05-27T01:56:42 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
- name: knowledge
dtype: string
- name: token_words
sequence:
sequence: string
- name: knowledge_words
sequence:
sequence: string
splits:
- name: train
num_bytes: 87045598
num_examples: 15724
- name: validation
num_bytes: 28512103
num_examples: 4301
- name: test
num_bytes: 32267375
num_examples: 4346
download_size: 29522634
dataset_size: 147825076
---
# Dataset Card for "ontonotes_zh_ner_knowledge_V3_wc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3226977288722992,
-0.0501461960375309,
0.6333478093147278,
0.18188457190990448,
-0.1308174878358841,
-0.3022218346595764,
0.28121790289878845,
-0.24964968860149384,
0.643700122833252,
0.8346673250198364,
-0.7410033345222473,
-1.018990397453308,
-0.535069465637207,
-0.2985226809978485,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TigerResearch/pretrain_zh | TigerResearch | 2023-06-14T13:50:32Z | 47 | 85 | null | [
"region:us"
] | 2023-06-14T13:50:32Z | 2023-06-01T01:45:01.000Z | 2023-06-01T01:45:01 | ---
dataset_info:
features:
- name: dataType
dtype: string
- name: title
dtype: string
- name: content
dtype: string
- name: uniqueKey
dtype: string
- name: titleUkey
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 58043923125
num_examples: 16905023
download_size: 25662051889
dataset_size: 58043923125
---
# Dataset Card for "pretrain_zh"
The Chinese portion of the [Tigerbot](https://github.com/TigerResearch/TigerBot) pretraining data.
It contains (before compression) 12 GB of Chinese books (zh-books), 25 GB of Chinese web text (zh-webtext), and 19 GB of Chinese encyclopedia text (zh-wiki).
For more corpora, follow the open-source models and ongoing updates at [https://github.com/TigerResearch/TigerBot](https://github.com/TigerResearch/TigerBot).
## Usage
```python
import datasets
ds_sft = datasets.load_dataset('TigerResearch/pretrain_zh')
``` | [
-0.4131578207015991,
-0.20038872957229614,
-0.011863907799124718,
0.0839497447013855,
-0.8435505628585815,
-0.21018019318580627,
-0.13746453821659088,
-0.055396419018507004,
0.4120100140571594,
0.32402685284614563,
-0.9129474759101868,
-0.6935513615608215,
-0.08964525163173676,
0.186449319... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tum-nlp/sexism-socialmedia-balanced | tum-nlp | 2023-06-08T11:56:54Z | 47 | 1 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2023-06-08T11:56:54Z | 2023-06-08T11:56:02.000Z | 2023-06-08T11:56:02 | ---
license: cc-by-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
llm-book/jsnli | llm-book | 2023-10-25T15:22:46Z | 47 | 0 | null | [
"size_categories:100K<n<1M",
"language:ja",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-10-25T15:22:46Z | 2023-06-19T12:31:46.000Z | 2023-06-19T12:31:46 | ---
language:
- ja
size_categories:
- 100K<n<1M
license:
- cc-by-sa-4.0
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 97491392
num_examples: 533005
- name: validation
num_bytes: 712792
num_examples: 3916
download_size: 44931163
dataset_size: 98204184
---
# Dataset Card for llm-book/jsnli
This is the [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?日本語SNLI(JSNLI)データセット) dataset used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
Of the JSNLI Version 1.1 data, the post-filtering training set (train_w_filtering) and the validation set (dev) are used.
## Licence
CC BY-SA 4.0
| [
-0.41570422053337097,
-0.2776492238044739,
0.16204437613487244,
0.07420860975980759,
-0.7444119453430176,
-0.1706511229276657,
-0.14588287472724915,
-0.22442679107189178,
0.5009294152259827,
0.658653199672699,
-1.0315074920654297,
-0.9328863620758057,
-0.39313358068466187,
0.11038623005151... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KagglingFace/vit-cats-dogs | KagglingFace | 2023-07-02T12:19:43Z | 47 | 1 | null | [
"license:mit",
"region:us"
] | 2023-07-02T12:19:43Z | 2023-07-02T11:56:09.000Z | 2023-07-02T11:56:09 | ---
license: mit
---
| [
-0.1285337507724762,
-0.18616777658462524,
0.6529126167297363,
0.49436259269714355,
-0.19319328665733337,
0.2360745370388031,
0.3607197403907776,
0.05056323483586311,
0.5793652534484863,
0.740013837814331,
-0.6508102416992188,
-0.23783975839614868,
-0.710224986076355,
-0.047825887799263,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SeyedAli/Persian-Text-Sentiment | SeyedAli | 2023-09-09T15:42:06Z | 47 | 1 | null | [
"task_categories:text-classification",
"language:fa",
"license:mit",
"region:us"
] | 2023-09-09T15:42:06Z | 2023-09-08T18:09:45.000Z | 2023-09-08T18:09:45 | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 10222986
num_examples: 55852
- name: test
num_bytes: 2575303
num_examples: 13964
download_size: 6076096
dataset_size: 12798289
task_categories:
- text-classification
language:
- fa
---
Dataset Classes
* negative: 0
* positive: 1 | [
-0.4934436082839966,
-0.1949893981218338,
-0.2744881510734558,
0.20053285360336304,
-0.12737147510051727,
-0.08810877054929733,
0.27591672539711,
-0.042786359786987305,
0.7109588980674744,
0.6547881364822388,
-0.5027189254760742,
-0.5124761462211609,
-0.6773156523704529,
-0.031279399991035... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
TokenBender/roleplay_alpaca | TokenBender | 2023-09-24T19:32:28Z | 47 | 2 | null | [
"license:artistic-2.0",
"region:us"
] | 2023-09-24T19:32:28Z | 2023-09-10T13:03:22.000Z | 2023-09-10T13:03:22 | ---
license: artistic-2.0
---
| [
-0.1285337507724762,
-0.18616777658462524,
0.6529126167297363,
0.49436259269714355,
-0.19319328665733337,
0.2360745370388031,
0.3607197403907776,
0.05056323483586311,
0.5793652534484863,
0.740013837814331,
-0.6508102416992188,
-0.23783975839614868,
-0.710224986076355,
-0.047825887799263,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hyder12/LLM_Bootcamp_Fine_tune_QnA | Hyder12 | 2023-09-25T21:53:39Z | 47 | 0 | null | [
"region:us"
] | 2023-09-25T21:53:39Z | 2023-09-20T04:00:25.000Z | 2023-09-20T04:00:25 | Entry not found | [
-0.32276490330696106,
-0.22568447887897491,
0.8622260093688965,
0.43461495637893677,
-0.5282987356185913,
0.7012965083122253,
0.7915716171264648,
0.07618637382984161,
0.7746024131774902,
0.25632190704345703,
-0.7852814197540283,
-0.22573809325695038,
-0.9104480743408203,
0.5715669393539429... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
faisaltareque/multilingual-news-prompt | faisaltareque | 2023-09-23T14:19:10Z | 47 | 0 | null | [
"region:us"
] | 2023-09-23T14:19:10Z | 2023-09-23T14:02:14.000Z | 2023-09-23T14:02:14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: headline
dtype: string
- name: article
dtype: string
- name: lang
dtype: string
- name: image_caption_separated
dtype: string
- name: topic_word_separated
dtype: string
- name: image_based_top_3
dtype: string
- name: caption_based_top_3
dtype: string
- name: image_based_top_5
dtype: string
- name: caption_based_top_5
dtype: string
- name: image_based_top_10
dtype: string
- name: caption_based_top_10
dtype: string
- name: image_based_top_15
dtype: string
- name: caption_based_top_15
dtype: string
- name: topic_word_separated_new
dtype: string
- name: topic_word_count_new
dtype: int64
- name: prompt_type
dtype: string
- name: article_prompt
dtype: string
splits:
- name: train
num_bytes: 9136949083
num_examples: 394353
- name: valid
num_bytes: 121366337
num_examples: 5187
- name: test
num_bytes: 358666498
num_examples: 15577
download_size: 5317632829
dataset_size: 9616981918
---
# Dataset Card for "multilingual-news-prompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6112954616546631,
-0.39528530836105347,
0.26022031903266907,
0.5267356038093567,
-0.31170302629470825,
0.13438162207603455,
-0.19141338765621185,
-0.15677635371685028,
0.9946469068527222,
0.29671379923820496,
-0.9592784643173218,
-0.888970673084259,
-0.531973123550415,
0.103548578917980... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SEACrowd/posp | SEACrowd | 2023-09-26T12:32:13Z | 47 | 0 | null | [
"language:ind",
"pos-tagging",
"region:us"
] | 2023-09-26T12:32:13Z | 2023-09-26T11:16:27.000Z | 2023-09-26T11:16:27 | ---
tags:
- pos-tagging
language:
- ind
---
# posp
POSP is a POS tagging dataset containing 8400 sentences, collected from an Indonesian news website, with 26 POS tag classes.
The POS tag labels follow the Indonesian Association of Computational Linguistics (INACL) POS Tagging Convention.
The POSP dataset is split into 3 sets: 6720 train, 840 validation, and 840 test examples.
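The split sizes above correspond to an 80/10/10 ratio; a quick sanity check, using only the numbers stated on this card:

```python
# Split sizes as stated on this card: 6720 train, 840 validation, 840 test.
splits = {"train": 6720, "validation": 840, "test": 840}
total = sum(splits.values())
assert total == 8400  # matches the 8400 sentences stated above

ratios = {name: n / total for name, n in splits.items()}
print(ratios)  # {'train': 0.8, 'validation': 0.1, 'test': 0.1}
```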
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
## Citation
```
@inproceedings{hoesen2018investigating,
title={Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian Named Entity Tagger},
author={Devin Hoesen and Ayu Purwarianti},
booktitle={Proceedings of the 2018 International Conference on Asian Language Processing (IALP)},
pages={35--38},
year={2018},
organization={IEEE}
}
@inproceedings{wilie2020indonlu,
title={IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding},
author={Bryan Wilie and Karissa Vincentio and Genta Indra Winata and Samuel Cahyawijaya and X. Li and Zhi Yuan Lim and S. Soleman and R. Mahendra and Pascale Fung and Syafri Bahar and A. Purwarianti},
booktitle={Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing},
year={2020}
}
```
## License
Creative Common Attribution Share-Alike 4.0 International
## Homepage
[https://github.com/IndoNLP/indonlu](https://github.com/IndoNLP/indonlu)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) | [
-0.6360588669776917,
-0.7159399390220642,
0.024938900023698807,
0.4274260699748993,
-0.4355448782444,
-0.2550177276134491,
-0.3431988060474396,
-0.44045671820640564,
0.28091660141944885,
0.6210150122642517,
-0.07710626721382141,
-0.5893279314041138,
-0.3673505187034607,
0.4948183596134186,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mahendrakharra/BBC-News-Articles-Summaries | Mahendrakharra | 2023-10-20T06:44:04Z | 47 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-20T06:44:04Z | 2023-10-20T06:42:20.000Z | 2023-10-20T06:42:20 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: articles
dtype: string
- name: summaries
dtype: string
splits:
- name: train
num_bytes: 6073684
num_examples: 1800
- name: test
num_bytes: 1188941
num_examples: 425
download_size: 4242870
dataset_size: 7262625
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
XiaHan19/cmmlu | XiaHan19 | 2023-10-20T19:55:23Z | 47 | 0 | null | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"chinese",
"llm",
"evaluation",
"arxiv:2306.09212",
"region:us"
] | 2023-10-20T19:55:23Z | 2023-10-20T14:06:00.000Z | 2023-10-20T14:06:00 | ---
license: cc-by-nc-4.0
task_categories:
- multiple-choice
- question-answering
language:
- zh
tags:
- chinese
- llm
- evaluation
pretty_name: CMMLU
size_categories:
- 10K<n<100K
---
# CMMLU: Measuring massive multitask language understanding in Chinese
- **Homepage:** [https://github.com/haonan-li/CMMLU](https://github.com/haonan-li/CMMLU)
- **Repository:** [https://huggingface.co/datasets/haonan-li/cmmlu](https://huggingface.co/datasets/haonan-li/cmmlu)
- **Paper:** [CMMLU: Measuring Chinese Massive Multitask Language Understanding](https://arxiv.org/abs/2306.09212).
## Table of Contents
- [Introduction](#introduction)
- [Leaderboard](#leaderboard)
- [Data](#data)
- [Citation](#citation)
- [License](#license)
## Introduction
CMMLU is a comprehensive Chinese assessment suite specifically designed to evaluate the advanced knowledge and reasoning abilities of LLMs within the Chinese language and cultural context.
CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. It includes subjects that require computational expertise, such as physics and mathematics, as well as disciplines within humanities and social sciences.
Many of these tasks are not easily translatable from other languages due to their specific contextual nuances and wording.
Furthermore, numerous tasks within CMMLU have answers that are specific to China and may not be universally applicable or considered correct in other regions or languages.
## Leaderboard
Latest leaderboard is in our [github](https://github.com/haonan-li/CMMLU).
## Data
We provide a development and test set for each of the 67 subjects, with 5 questions in the development set and 100+ questions in the test set.
Each question in the dataset is a multiple-choice question with 4 choices, only one of which is correct.
Here are two examples:
```
题目:同一物种的两类细胞各产生一种分泌蛋白,组成这两种蛋白质的各种氨基酸含量相同,但排列顺序不同。其原因是参与这两种蛋白质合成的:
A. tRNA种类不同
B. 同一密码子所决定的氨基酸不同
C. mRNA碱基序列不同
D. 核糖体成分不同
答案是:C
```
```
题目:某种植物病毒V是通过稻飞虱吸食水稻汁液在水稻间传播的。稻田中青蛙数量的增加可减少该病毒在水稻间的传播。下列叙述正确的是:
A. 青蛙与稻飞虱是捕食关系
B. 水稻和病毒V是互利共生关系
C. 病毒V与青蛙是寄生关系
D. 水稻与青蛙是竞争关系
答案是:
```
#### Load data
```python
from datasets import load_dataset
cmmlu = load_dataset("haonan-li/cmmlu", "agronomy")
print(cmmlu['test'][0])
```
#### Load all data at once
```python
task_list = ['agronomy', 'anatomy', 'ancient_chinese', 'arts', 'astronomy', 'business_ethics', 'chinese_civil_service_exam', 'chinese_driving_rule', 'chinese_food_culture', 'chinese_foreign_policy', 'chinese_history', 'chinese_literature',
'chinese_teacher_qualification', 'clinical_knowledge', 'college_actuarial_science', 'college_education', 'college_engineering_hydrology', 'college_law', 'college_mathematics', 'college_medical_statistics', 'college_medicine', 'computer_science',
'computer_security', 'conceptual_physics', 'construction_project_management', 'economics', 'education', 'electrical_engineering', 'elementary_chinese', 'elementary_commonsense', 'elementary_information_and_technology', 'elementary_mathematics',
'ethnology', 'food_science', 'genetics', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_geography', 'high_school_mathematics', 'high_school_physics', 'high_school_politics', 'human_sexuality',
'international_law', 'journalism', 'jurisprudence', 'legal_and_moral_basis', 'logical', 'machine_learning', 'management', 'marketing', 'marxist_theory', 'modern_chinese', 'nutrition', 'philosophy', 'professional_accounting', 'professional_law',
'professional_medicine', 'professional_psychology', 'public_relations', 'security_study', 'sociology', 'sports_science', 'traditional_chinese_medicine', 'virology', 'world_history', 'world_religions']
from datasets import load_dataset
cmmlu = {k: load_dataset(r"haonan-li/cmmlu", k) for k in task_list}
```
## Citation
```
@misc{li2023cmmlu,
title={CMMLU: Measuring massive multitask language understanding in Chinese},
author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin},
year={2023},
eprint={2306.09212},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
The CMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
| [
-0.3426707983016968,
-0.7962832450866699,
0.4867807626724243,
0.2306753247976303,
-0.20297156274318695,
-0.0886649414896965,
-0.5449652075767517,
-0.05252194032073021,
0.15940327942371368,
0.2757505178451538,
-0.4781206548213959,
-0.7366830706596375,
-0.5766618251800537,
0.1281528025865554... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdapterOcean/med_alpaca_standardized_cluster_85_std | AdapterOcean | 2023-10-24T02:22:46Z | 47 | 0 | null | [
"region:us"
] | 2023-10-24T02:22:46Z | 2023-10-24T02:22:42.000Z | 2023-10-24T02:22:42 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1664891
num_examples: 10997
download_size: 681626
dataset_size: 1664891
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "med_alpaca_standardized_cluster_85_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6282910704612732,
-0.5290157794952393,
0.35055091977119446,
0.2895018458366394,
-0.5861876606941223,
-0.19535541534423828,
0.30642834305763245,
-0.3798898458480835,
1.0298227071762085,
0.5499840974807739,
-0.6558672189712524,
-1.253124475479126,
-0.7906874418258667,
-0.19776122272014618... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emi429/humansleepproject-rr-small-individuals | emi429 | 2023-10-26T18:41:16Z | 47 | 0 | null | [
"region:us"
] | 2023-10-26T18:41:16Z | 2023-10-26T18:41:07.000Z | 2023-10-26T18:41:07 | ---
dataset_info:
features:
- name: rr_intervals
sequence: float64
- name: sleep_stage
dtype: string
- name: patient_id
dtype: string
splits:
- name: test
num_bytes: 1631857
num_examples: 504
- name: train
num_bytes: 5747903
num_examples: 2070
download_size: 1335531
dataset_size: 7379760
---
# Dataset Card for "humansleepproject-rr-small-individuals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4721018373966217,
-0.07703173905611038,
0.1771324872970581,
0.2874092161655426,
-0.10605686902999878,
0.1008518859744072,
0.13886982202529907,
-0.304879754781723,
1.0218631029129028,
0.362344890832901,
-0.8770853877067566,
-0.5855293273925781,
-0.40403905510902405,
-0.09966804087162018,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
JJhooww/dolphin_2-PTBR | JJhooww | 2023-11-05T11:38:32Z | 47 | 0 | null | [
"region:us"
] | 2023-11-05T11:38:32Z | 2023-11-03T02:29:53.000Z | 2023-11-03T02:29:53 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ftang97/sw-consultancy-agent | ftang97 | 2023-11-03T15:23:28Z | 47 | 0 | null | [
"region:us"
] | 2023-11-03T15:23:28Z | 2023-11-03T15:23:23.000Z | 2023-11-03T15:23:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2311272.0
num_examples: 282
- name: test
num_bytes: 262272.0
num_examples: 32
download_size: 1195336
dataset_size: 2573544.0
---
# Dataset Card for "sw-consultancy-agent"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4986442029476166,
-0.11498375982046127,
0.07237371057271957,
0.35466277599334717,
-0.1487264633178711,
0.17914219200611115,
0.24767805635929108,
-0.40630635619163513,
0.8093038201332092,
0.6708956360816956,
-1.0192432403564453,
-0.7914113402366638,
-0.30803048610687256,
-0.2601843178272... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gowitheflow/wiki1M-word-random-shuffle | gowitheflow | 2023-11-03T21:57:53Z | 47 | 0 | null | [
"region:us"
] | 2023-11-03T21:57:53Z | 2023-11-03T21:45:11.000Z | 2023-11-03T21:45:11 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bragovo/dsum_ru | bragovo | 2023-11-05T18:59:11Z | 47 | 0 | null | [
"language:ru",
"region:us"
] | 2023-11-05T18:59:11Z | 2023-11-04T07:26:32.000Z | 2023-11-04T07:26:32 | ---
configs:
- config_name: dsum
task: summarization
data_files:
- split: train
path: data/train-*
language:
- ru
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dbaezaj/ner-dataset | dbaezaj | 2023-11-10T20:05:16Z | 47 | 0 | null | [
"region:us"
] | 2023-11-10T20:05:16Z | 2023-11-09T20:56:46.000Z | 2023-11-09T20:56:46 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vic0428/imdb-card-pred-decimal | vic0428 | 2023-11-18T06:20:19Z | 47 | 0 | null | [
"region:us"
] | 2023-11-18T06:20:19Z | 2023-11-10T01:06:42.000Z | 2023-11-10T01:06:42 | ---
dataset_info:
features:
- name: text
dtype: string
- name: prompt
dtype: string
- name: true_cardinality
dtype: int64
splits:
- name: train
num_bytes: 39101954.4
num_examples: 80000
- name: test
num_bytes: 9775488.6
num_examples: 20000
download_size: 8384711
dataset_size: 48877443.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "imdb-card-pred-decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8244408965110779,
-0.18544697761535645,
0.03292844444513321,
0.20434650778770447,
-0.6008744835853577,
0.000012268622413103003,
0.08009583503007889,
0.05527135729789734,
1.0293127298355103,
0.48280400037765503,
-0.8595274090766907,
-0.7329005002975464,
-0.7949085831642151,
-0.0688439011... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
quccili/invoice | quccili | 2023-11-11T12:59:29Z | 47 | 0 | null | [
"region:us"
] | 2023-11-11T12:59:29Z | 2023-11-11T12:59:26.000Z | 2023-11-11T12:59:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 14986534.0
num_examples: 18
- name: validation
num_bytes: 14986534.0
num_examples: 18
- name: test
num_bytes: 14986534.0
num_examples: 18
download_size: 39577947
dataset_size: 44959602.0
---
# Dataset Card for "invoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3178991675376892,
0.02958887442946434,
0.1254829615354538,
0.27214398980140686,
-0.2490726113319397,
-0.0009217699407599866,
0.4067586362361908,
-0.2530744969844818,
0.7383735179901123,
0.756106436252594,
-0.6797443628311157,
-0.638482391834259,
-0.45107975602149963,
-0.4716247618198395... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kuanhuggingface/tencent_tts_encodec | kuanhuggingface | 2023-11-17T01:35:57Z | 47 | 0 | null | [
"region:us"
] | 2023-11-17T01:35:57Z | 2023-11-17T01:33:49.000Z | 2023-11-17T01:33:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file_id
dtype: string
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 18583644220
num_examples: 266780
- name: validation
num_bytes: 527818324
num_examples: 7620
- name: test
num_bytes: 508374588
num_examples: 7620
download_size: 470732178
dataset_size: 19619837132
---
# Dataset Card for "tencent_tts_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4510171711444855,
-0.241208016872406,
0.3265891969203949,
0.42925718426704407,
-0.275423526763916,
0.16479170322418213,
-0.10438919067382812,
0.021701982244849205,
1.0066993236541748,
0.458609938621521,
-0.7029346227645874,
-0.8601439595222473,
-0.5664229393005371,
0.1112247109413147,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_multiplication_whole | jlbaker361 | 2023-11-17T05:53:40Z | 47 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:40Z | 2023-11-17T04:47:33.000Z | 2023-11-17T04:47:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1343.111111111111
num_examples: 40
- name: test
num_bytes: 167.88888888888889
num_examples: 5
download_size: 4215
dataset_size: 1511.0
---
# Dataset Card for "small_multiplication_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5561444759368896,
-0.48626604676246643,
0.19615599513053894,
0.541283130645752,
-0.39656564593315125,
-0.2979372441768646,
0.15598993003368378,
-0.11392595618963242,
1.093079924583435,
0.34643256664276123,
-0.6473739147186279,
-0.40439286828041077,
-0.685209333896637,
-0.266779184341430... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imvladikon/hebrew_speech_campus | imvladikon | 2023-11-20T21:46:41Z | 47 | 3 | null | [
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:he",
"region:us"
] | 2023-11-20T21:46:41Z | 2023-11-18T18:39:11.000Z | 2023-11-18T18:39:11 | ---
language:
- he
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: uid
dtype: string
- name: file_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: n_segment
dtype: int32
- name: duration_ms
dtype: float32
- name: language
dtype: string
- name: sample_rate
dtype: int32
- name: course
dtype: string
- name: sentence_length
dtype: int32
- name: n_tokens
dtype: int32
splits:
- name: train
num_bytes: 17559119499.576
num_examples: 75924
download_size: 17274739665
dataset_size: 17559119499.576
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
## Data Description
Hebrew Speech Recognition dataset from [Campus IL](https://campus.gov.il/).
Data was scraped from the Campus website, which contains video lectures from various courses in Hebrew.
Then subtitles were extracted from the videos and aligned with the audio.
Subtitles that are not in Hebrew were removed (WIP: non-Hebrew audio still needs to be removed as well, e.g. using a simple classifier).
Samples with a duration of less than 3 seconds were removed.
The total duration of the dataset is 152 hours.
Outliers in terms of the duration/char ratio were not removed, so you may find sentences that are suspiciously long or short relative to their audio duration.
Note: if loading is slow, just clone it:
`git clone hebrew_speech_campus && cd hebrew_speech_campus && git lfs pull`
and load it from the folder `load_dataset("./hebrew_speech_campus")`
## Data Format
Audio files are in WAV format: 16kHz sampling rate, 16-bit, mono. Ignore the `path` field; use the `audio.array` field value.
## Data Usage
```python
from datasets import load_dataset
ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)
print(next(iter(ds)))
```
## Data Sample
```
{'uid': '10c3eda27cf173ab25bde755d0023abed301fcfd',
'file_id': '10c3eda27cf173ab25bde755d0023abed301fcfd_13',
'audio': {'path': '/content/hebrew_speech_campus/data/from_another_angle-_mathematics_teaching_practices/10c3eda27cf173ab25bde755d0023abed301fcfd_13.wav',
'array': array([ 5.54326562e-07, 3.60812592e-05, -2.35188054e-04, ...,
2.34067178e-04, 1.55649337e-04, 6.32447700e-05]),
'sampling_rate': 16000},
'sentence': 'הדוברים צריכים לקחת עליו אחריות, ולהיות מחויבים לו כלומר, השיח צריך להיות מחויב',
'n_segment': 13,
'duration_ms': 6607.98193359375,
'language': 'he',
'sample_rate': 16000,
'course': 'from_another_angle-_mathematics_teaching_practices',
'sentence_length': 79,
'n_tokens': 13}
```
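The duration/char ratio used for the (skipped) outlier screening can be computed directly from the fields shown in the sample record above; a minimal sketch:

```python
# Fields copied from the sample record shown above.
record = {"duration_ms": 6607.98193359375, "sentence_length": 79}

# Milliseconds of audio per character of the transcript.
ms_per_char = record["duration_ms"] / record["sentence_length"]
print(round(ms_per_char, 1))  # roughly 83.6 ms per character for this sample
```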
## Data Splits and Stats
Split: train
Number of samples: 75924
## Citation
Please cite the following if you use this dataset in your work:
```
@misc{imvladikon2023hebrew_speech_campus,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Campus},
year = {2023},
howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_campus}},
}
```
| [
-0.24652481079101562,
-0.6021994352340698,
0.010790057480335236,
0.12026206403970718,
-0.2965680658817291,
-0.12390182912349701,
-0.4805515706539154,
-0.2021416872739792,
0.3512609302997589,
0.21649496257305145,
-0.44219768047332764,
-0.8072877526283264,
-0.5176733136177063,
-0.06036657467... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
susnato/plant_disease_detection_processed | susnato | 2023-11-25T12:05:23Z | 47 | 0 | null | [
"task_categories:object-detection",
"license:cc-by-4.0",
"region:us"
] | 2023-11-25T12:05:23Z | 2023-11-24T10:43:53.000Z | 2023-11-24T10:43:53 | ---
license: cc-by-4.0
task_categories:
- object-detection
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int64
- name: height
dtype: int64
- name: objects
struct:
- name: area
sequence: int64
- name: bbox
sequence:
sequence: int64
- name: category
sequence: int64
- name: pixel_values
sequence:
sequence:
sequence: float32
- name: pixel_mask
sequence:
sequence: int64
- name: labels
struct:
- name: area
sequence: float32
- name: boxes
sequence:
sequence: float32
- name: class_labels
sequence: int64
- name: image_id
sequence: int64
- name: iscrowd
sequence: int64
- name: orig_size
sequence: int64
- name: size
sequence: int64
splits:
- name: train
num_bytes: 27853534555.06
num_examples: 2110
- name: test
num_bytes: 2810816579.0
num_examples: 214
download_size: 5331925364
dataset_size: 30664351134.06
---
This dataset was created by processing the files from this GitHub repository: [PlantDoc-Object-Detection-Dataset](https://github.com/pratikkayal/PlantDoc-Object-Detection-Dataset/tree/master)
## Citation
BibTeX:
```
@inproceedings{10.1145/3371158.3371196,
author = {Singh, Davinder and Jain, Naman and Jain, Pranjali and Kayal, Pratik and Kumawat, Sudhakar and Batra, Nipun},
title = {PlantDoc: A Dataset for Visual Plant Disease Detection},
year = {2020},
isbn = {9781450377386},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3371158.3371196},
doi = {10.1145/3371158.3371196},
booktitle = {Proceedings of the 7th ACM IKDD CoDS and 25th COMAD},
pages = {249–253},
numpages = {5},
keywords = {Deep Learning, Object Detection, Image Classification},
location = {Hyderabad, India},
series = {CoDS COMAD 2020}
}
``` | [
-0.2788114845752716,
-0.6087499260902405,
0.36295077204704285,
-0.4948284327983856,
-0.39741700887680054,
-0.08290969580411911,
-0.02613728679716587,
-0.4134702980518341,
-0.10074929147958755,
0.30130913853645325,
-0.4018082320690155,
-1.0529861450195312,
-0.5449612736701965,
0.12080314755... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NamCyan/repo-codegen-v3 | NamCyan | 2023-11-28T02:32:31Z | 47 | 0 | null | [
"region:us"
] | 2023-11-28T02:32:31Z | 2023-11-26T17:04:00.000Z | 2023-11-26T17:04:00 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mideind/icelandic-error-corpus-IceEC | mideind | 2022-10-25T09:51:04Z | 46 | 1 | null | [
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:is",
"license:cc-by-4.0",
"region:us"
] | 2022-10-25T09:51:04Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language:
- is
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
pretty_name: Icelandic Error Corpus
---
# Icelandic Error Corpus
Refer to [https://github.com/antonkarl/iceErrorCorpus](https://github.com/antonkarl/iceErrorCorpus) for a description of the dataset.
Please cite the dataset as follows if you use it.
```
Anton Karl Ingason, Lilja Björk Stefánsdóttir, Þórunn Arnardóttir, and Xindan Xu. 2021. The Icelandic Error Corpus (IceEC). Version 1.1. (https://github.com/antonkarl/iceErrorCorpus)
``` | [
-0.19225822389125824,
-0.35171380639076233,
0.10565721988677979,
0.5064899325370789,
-0.14333033561706543,
0.10684116184711456,
0.0012875678949058056,
-0.32025372982025146,
0.3112260103225708,
0.51817786693573,
-0.5775015950202942,
-0.628646731376648,
-0.6622685194015503,
0.447275996208190... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codyburker/yelp_review_sampled | codyburker | 2022-03-05T17:29:30Z | 46 | 0 | null | [
"region:us"
] | 2022-03-05T17:29:30Z | 2022-03-05T17:12:15.000Z | 2022-03-05T17:12:15 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blinoff/kinopoisk | blinoff | 2022-10-23T16:51:58Z | 46 | 3 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"region:us"
] | 2022-10-23T16:51:58Z | 2022-04-26T09:47:00.000Z | 2022-04-26T09:47:00 | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: Kinopoisk
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
### Dataset Summary
Kinopoisk movie reviews dataset (TOP250 & BOTTOM100 rank lists).
In total, it contains 36,591 reviews from July 2004 to November 2012,
with the following distribution along the 3-point sentiment scale:
- Good: 27,264;
- Bad: 4,751;
- Neutral: 4,576.
### Data Fields
Each sample contains the following fields:
- **part**: rank list top250 or bottom100;
- **movie_name**;
- **review_id**;
- **author**: review author;
- **date**: date of the review;
- **title**: review title;
- **grade3**: sentiment score Good, Bad or Neutral;
- **grade10**: sentiment score on a 10-point scale parsed from text;
- **content**: review text.
### Python
```python3
import pandas as pd
df = pd.read_json('kinopoisk.jsonl', lines=True)
df.sample(5)
```
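As a quick sanity check, the 3-point distribution listed above can be recomputed from the `grade3` column. A minimal sketch on a toy frame (only the column name is taken from the field list above; the real file is `kinopoisk.jsonl`):

```python
import pandas as pd

# Toy stand-in for kinopoisk.jsonl; the real file has the same `grade3` column
# with values Good, Bad or Neutral.
df = pd.DataFrame({
    "grade3": ["Good", "Good", "Bad", "Neutral", "Good"],
})

# Count reviews per sentiment class.
counts = df["grade3"].value_counts()
print(counts.to_dict())
```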
### Citation
```
@article{blinov2013research,
title={Research of lexical approach and machine learning methods for sentiment analysis},
author={Blinov, PD and Klekovkina, Maria and Kotelnikov, Eugeny and Pestov, Oleg},
journal={Computational Linguistics and Intellectual Technologies},
volume={2},
number={12},
pages={48--58},
year={2013}
}
```
| [
-0.43138518929481506,
-0.3669019341468811,
0.2895537316799164,
0.37839436531066895,
-0.6491606831550598,
-0.023594345897436142,
0.17600859701633453,
-0.051651936024427414,
0.6480013728141785,
0.4254070520401001,
-0.5441492199897766,
-1.0480698347091675,
-0.6926649808883667,
0.3348973095417... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
domenicrosati/QA2D | domenicrosati | 2022-10-25T10:13:31Z | 46 | 2 | null | [
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:machine-generated",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categorie... | 2022-10-25T10:13:31Z | 2022-05-09T23:35:19.000Z | 2022-05-09T23:35:19 | ---
annotations_creators:
- machine-generated
- crowdsourced
- found
language_creators:
- machine-generated
- crowdsourced
language: []
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
- extended|squad
- extended|race
- extended|newsqa
- extended|qamr
- extended|movieQA
task_categories:
- text2text-generation
task_ids:
- text-simplification
pretty_name: QA2D
---
# Dataset Card for QA2D
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://worksheets.codalab.org/worksheets/0xd4ebc52cebb84130a07cbfe81597aaf0/
- **Repository:** https://github.com/kelvinguu/qanli
- **Paper:** https://arxiv.org/abs/1809.02922
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Existing datasets for natural language inference (NLI) have propelled research on language understanding. We propose a new method for automatically deriving NLI datasets from the growing abundance of large-scale question answering datasets. Our approach hinges on learning a sentence transformation model which converts question-answer pairs into their declarative forms. Despite being primarily trained on a single QA dataset, we show that it can be successfully applied to a variety of other QA resources. Using this system, we automatically derive a new freely available dataset of over 500k NLI examples (QA-NLI), and show that it exhibits a wide range of inference phenomena rarely seen in previous NLI datasets.
This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
en
## Dataset Structure
### Data Instances
See below.
### Data Fields
- `dataset`: lowercased name of dataset (movieqa, newsqa, qamr, race, squad)
- `example_uid`: unique id of example within dataset (there are examples with the same uids from different datasets, so the combination of dataset + example_uid should be used for unique indexing)
- `question`: tokenized (space-separated) question from the source QA dataset
- `answer`: tokenized (space-separated) answer span from the source QA dataset
- `turker_answer`: tokenized (space-separated) answer sentence collected from MTurk
- `rule-based`: tokenized (space-separated) answer sentence, generated by the rule-based model
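Since `example_uid` values can collide across source datasets, a composite key is needed for unique indexing. A minimal pandas sketch on toy rows (only the column names are taken from the list above):

```python
import pandas as pd

# Toy rows with the QA2D column names; the uid collides across datasets on purpose.
df = pd.DataFrame([
    {"dataset": "squad", "example_uid": "42", "question": "who wrote it ?", "answer": "shakespeare"},
    {"dataset": "race",  "example_uid": "42", "question": "what color ?",   "answer": "blue"},
])

# Combine dataset + example_uid into a key that is unique across the corpus.
df["uid"] = df["dataset"] + ":" + df["example_uid"]
assert df["uid"].is_unique
print(df["uid"].tolist())
```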
### Data Splits
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 60,710 |
| Dev | 10,344 |
## Dataset Creation
### Curation Rationale
This Question to Declarative Sentence (QA2D) Dataset contains 86k question-answer pairs and their manual transformation into declarative sentences. 95% of question-answer pairs come from SQuAD (Rajpurkar et al., 2016) and the remaining 5% come from four other question answering datasets.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@article{DBLP:journals/corr/abs-1809-02922,
  author     = {Dorottya Demszky and
                Kelvin Guu and
                Percy Liang},
  title      = {Transforming Question Answering Datasets Into Natural Language Inference
                Datasets},
  journal    = {CoRR},
  volume     = {abs/1809.02922},
  year       = {2018},
  url        = {http://arxiv.org/abs/1809.02922},
  eprinttype = {arXiv},
  eprint     = {1809.02922},
  timestamp  = {Fri, 05 Oct 2018 11:34:52 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1809-02922.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
-0.35273000597953796,
-0.8484819531440735,
0.33906424045562744,
-0.00927277747541666,
-0.1626112014055252,
-0.01554016675800085,
0.10365087538957596,
-0.23779766261577606,
0.17825254797935486,
0.6312947273254395,
-0.8019318580627441,
-0.6853063106536865,
-0.2997404634952545,
0.364341408014... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
VanessaSchenkel/opus_books_en_pt | VanessaSchenkel | 2022-08-06T22:46:10Z | 46 | 1 | null | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:extended|opus_books",
"language:en",
"language:pt",
"license:afl-3.0",
"region:us"
] | 2022-08-06T22:46:10Z | 2022-08-06T22:34:58.000Z | 2022-08-06T22:34:58 | ---
annotations_creators:
- found
language:
- en
- pt
language_creators:
- found
license:
- afl-3.0
multilinguality:
- translation
pretty_name: VanessaSchenkel/opus_books_en_pt
size_categories:
- 1K<n<10K
source_datasets:
- extended|opus_books
tags: []
task_categories:
- translation
task_ids: []
---
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/opus_books_en_pt", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 1404
})
})
```
Example:
```
remote_dataset["train"][5]
```
Output:
```
{'id': '5',
'translation': {'en': "There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, 'Oh dear!",
'pt': 'Não havia nada de tão extraordinário nisso; nem Alice achou assim tão fora do normal ouvir o Coelho dizer para si mesmo: —"Oh, céus!'}}
``` | [
-0.37294337153434753,
-0.3663668632507324,
-0.08903435617685318,
0.06655514240264893,
-0.4840869605541229,
-0.3378662168979645,
-0.3540460467338562,
-0.14792472124099731,
0.4109690189361572,
0.48468637466430664,
-0.5561131834983826,
-0.8774475455284119,
-0.31460559368133545,
0.601081311702... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
heegyu/namuwiki | heegyu | 2022-10-01T02:40:40Z | 46 | 2 | null | [
"task_categories:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | 2022-10-01T02:40:40Z | 2022-10-01T00:40:12.000Z | 2022-10-01T00:40:12 | ---
license: cc-by-nc-sa-2.0
language:
- ko
language_creators:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- other
---
# namu.wiki database dump
https://namu.wiki/ database dump 2022/03/01<br/>
- 867024 rows
- download size: 3GB
## Usage
```bash
pip install datasets
```
```python
from datasets import load_dataset
dataset = load_dataset("heegyu/namuwiki")
print(dataset["train"][0])
```
```
{'title': '!!아앗!!',
'text': '\n[목차]\n\n\'\'\'{{{+1 !!ああっと!!}}}\'\'\'\n\n== 개요 ==\n[[파일:3444050440.jpg|width=60%]]\n▲[[신 세계수의 미궁 2 파프니르기사|신 세계수의 미궁 2]]에서 뜬 !!아앗!!\n\n[[세계수의 미궁 시리즈]]에 전통으로 등장하는 대사. [[세계수의 미궁 2 제왕의 성배|2편]]부터 등장했으며 훌륭한 [[사망 플래그]]의 예시이다.\n\n세계수의 모험가들이 탐험하는 던전인 수해의 구석구석에는 채취/벌채/채굴 포인트가 있으며, 이를 위한 채집 스킬에 투자하면 제한된 채집 기회에서 보다 큰 이득을 챙길 수 있다. 그러나 분배할 수 있는 스킬 포인트는 한정되어 있기 때문에 채집 스킬에 투자하는 만큼 전투 스킬 레벨은 낮아지게 된다.[* 다만 채집 시스템은 신 세계수 시리즈의 그리모어 복제, 복합 채집 스킬인 야생의 감, 5편의 종족 특유 스킬, 크로스의 1레벨이 만렙인 채집 스킬 등으로 편의성이 점차 나아져서 채집 스킬 때문에 스킬 트리가 내려가는 일은 점점 줄어들었다.] !!아앗!!이 발생하는 과정을 요약하면 다음과 같다.\n\n 1. 채집용 캐릭터들로 이루어진 약한 파티(ex: [[레인저(세계수의 미궁 2)|레인저]] 5명)가 수해에 입장한다.\n 1. 필드 전투를 피해 채집 포인트에 도착한 후 열심히 아이템을 캐는 중에...\n 1. \'\'\'!!아앗!!\'\'\' ~~라플레시아가 나타났다!~~\n 이때 등장하는 것은 [[FOE(세계수의 미궁 시리즈)|FOE]]는 아니지만 \'\'\'훨씬 위층에 등장하는 강력한 필드 몬스터이며 선제 공격을 당하게 된다!\'\'\'\n 1. \'\'\'으앙 죽음\'\'\'(hage)\n\n여담으로 !!아앗!!의 유래는 1인칭 던전 크롤러의 원조 [[위저드리]]에서 함정을 건드렸을 때 나오는 대사 Oops!(おおっと!)라고 한다.\n\n== 각 작품에서의 모습 ==\n=== [[세계수의 미궁 2 제왕의 성배]] ===\n!!아앗!!의 악랄함은 첫 등장한 작품이자 시리즈 중에서도 불친절하기로 정평이 난 2편이 절정이었다. 그야말로 위의 !!아앗!! 시퀀스 그대로, 묻지도 따지지도 않고 채집할 때마다 일정 확률로 \'\'\'강제로\'\'\' 전투에 돌입해야 했다. 게다가 이럴 때 쓰라고 있는 레인저의 스킬 \'위험 감지(중간 확률로 적의 선제 공격을 무효화)\'는 정작 작동하지 않는다!\n\n참고로 2편에서 채집 도중 !!아앗!!이 뜰 확률은 [[http://www.atlusnet.jp/topic/detail/910|고작 1%다.]] [[던파확률의 법칙|낮아 보이는 확률이어도 플레이 중 한 번이라도 일어나는 것]]을 경험하는 체감 확률을 고려하여 확률을 설정한다고.\n\n=== [[세계수의 미궁 3 성해의 내방자]] ===\n다행히 채집 중 낮은 확률로 "좋은 아이템을 얻을 수 있을 것 같지만... 주변에서 몬스터들의 기척이 느껴진다."는 메시지가 뜨고 이때 운이 좋으면 레어 아이템을 얻을 수 있지만 반대의 경우 적과 싸우게 되는 것으로 조정되었다.\n\n=== [[세계수의 미궁 4 전승의 거신]] ===\n기본적인 것은 3편과 같지만, 4편에서는 움직이지 않고 채집할 때도 턴이 경과하도록 조정되었기 때문에 주변에 있는 FOE를 잊고 채집에 몰두하다가 FOE와 부딪히면 FOE 버전 !!아앗!!이 뜬다. 그리고 난이도 CASUAL로 플레이시, FOE로 인한 !!아앗!!을 제외하면 절대로 발생하지 않는다.\n\n=== [[신 세계수의 미궁 밀레니엄의 소녀|신 세계수의]] [[신 세계수의 미궁 2 파프니르기사|미궁 시리즈]] ===\n채집 방식이 한 턴으로 끝나는 구조[* 채집으로 한 번 아이템을 획득하면 "다시, (채집 스킬)에 의해..."가 뜨면서 한꺼번에 획득되는 구조.]로 바뀐 덕분인지 강제 조우로 다시 회귀해버렸다(...). 그나마 위험 감지 먹통과 같은 버그성 난점들은 수정되었다. 
그 이후에 나온 [[세계수의 미궁 5 오랜 신화의 끝]]과 시리즈의 집대성 작품이자 3DS 마지막 작품인 [[세계수의 미궁 X]]도 마찬가지.\n\n=== [[세계수의 미궁 X]] ===\n본작의 채집은 신 세계수 시리즈와 같은 매커니즘이라 굳이 언급할 필요는 없으나, 퀘스트중에 2편의 !!아앗!! 시퀀스를 재현하면서 \'\'\'라플레시아\'\'\'가 등장하는 퀘스트가 존재한다.(...) 깨알같이 시스템 메세지 창이 아니라 대화창을 이용해서 완벽 재현한 것이 포인트.\n\n=== [[페르소나 Q 섀도우 오브 더 래버린스]] ===\n세계수 시스템을 기반으로 한 [[페르소나 시리즈]]와의 콜라보 작품인 페르소나 Q에서도 등장한다. 3, 4편과 같이 파워 스폿에서 채집 도중 메시지가 뜨며, 실패하면 파티에 참가하고 있는 멤버 중 한 명의 [[http://nico.ms/sm25683358|!!아앗!! 하는 음성]] ~~또는 [[코로마루|개소리]]~~과 함께 그 던전의 \'강적\'인 거대 [[섀도(페르소나 시리즈)|섀도우]]가 나타난다.\n\n그러나 내비 전용 스킬인 뱀눈 노려보기(위험 감지와 같은 효과)와 채집 보조 스킬은 파티의 전투력에 전혀 지장을 주지 않으며, \'대안심\'을 달면 거의 볼 일이 없어져서 초중반 이후에는 존재감이 급격히 줄어든다.\n[[분류:세계수의 미궁 시리즈]]',
'contributors': '110.46.34.123,kirby10,max0243,218.54.117.149,ruby3141,121.165.63.239,iviyuki,1.229.200.194,anatra95,kiri47,175.127.134.2,nickchaos71,chkong1998,kiwitree2,namubot,huwieblusnow',
'namespace': ''}
``` | [
-0.6648781895637512,
-0.6946192383766174,
0.1311725527048111,
0.3124914765357971,
-0.5236685276031494,
-0.10012611746788025,
0.23935116827487946,
-0.42836886644363403,
1.0348554849624634,
0.480536550283432,
-0.49104827642440796,
-0.4403489828109741,
-0.6944301128387451,
0.07552091032266617... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arize-ai/beer_reviews_label_drift_neg | arize-ai | 2022-10-19T13:20:26Z | 46 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | 2022-10-19T13:20:26Z | 2022-10-19T12:24:21.000Z | 2022-10-19T12:24:21 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | [
-0.6274514198303223,
-0.45711129903793335,
0.2552686929702759,
0.13153021037578583,
-0.3834384083747864,
0.16594748198986053,
-0.3392210900783539,
-0.19979636371135712,
0.6283900141716003,
0.6299205422401428,
-1.0277609825134277,
-0.9896200299263,
-0.5473517179489136,
0.03818346560001373,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joelniklaus/MultiLegalPile_Chunks_500 | joelniklaus | 2023-02-24T03:41:56Z | 46 | 1 | null | [
"region:us"
] | 2023-02-24T03:41:56Z | 2022-11-17T06:35:40.000Z | 2022-11-17T06:35:40 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
joelniklaus/MultiLegalPile_Wikipedia_Filtered | joelniklaus | 2022-11-29T21:52:23Z | 46 | 0 | null | [
"task_categories:fill-mask",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language... | 2022-11-29T21:52:23Z | 2022-11-17T19:28:00.000Z | 2022-11-17T19:28:00 | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MultiLegalPile_Wikipedia_Filtered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles."
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MultiLegalPile_Wikipedia_Filtered: A filtered version of the MultiLegalPile dataset, together with wikipedia articles
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models.
It spans over 24 languages and four legal text types.
### Supported Tasks and Leaderboards
The dataset supports the tasks of fill-mask.
### Languages
The following languages are supported:
bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
It is structured in the following format: {language}_{text_type}_{shard}.jsonl.xz
text_type is one of the following:
- caselaw
- contracts
- legislation
- other
- wikipedia
Use the dataset like this:
```python
from datasets import load_dataset
config = 'en_contracts' # {language}_{text_type}
dataset = load_dataset('joelito/Multi_Legal_Pile', config, split='train', streaming=True)
```
'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'.
To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., '
all_legislation').
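The `{language}_{text_type}` config convention, including the `'all'` wildcard, can be generated programmatically. A small sketch (the `config_name` helper and the shortened language list are illustrative, not part of the library):

```python
# Illustrative helper for building {language}_{text_type} config names;
# 'all' is accepted in either position, as described above.
LANGUAGES = ["bg", "cs", "da", "de", "en", "fr"]  # subset for brevity
TEXT_TYPES = ["caselaw", "contracts", "legislation", "other", "wikipedia"]

def config_name(language: str, text_type: str) -> str:
    if language != "all" and language not in LANGUAGES:
        raise ValueError(f"unknown language: {language}")
    if text_type != "all" and text_type not in TEXT_TYPES:
        raise ValueError(f"unknown text type: {text_type}")
    return f"{language}_{text_type}"

print(config_name("en", "contracts"))     # en_contracts
print(config_name("all", "legislation"))  # all_legislation
```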
### Data Instances
The file format is jsonl.xz and there is a `train` and `validation` split available.
Since some configurations are very small or non-existent, they might not contain a train split or not be present at all.
The complete dataset consists of five large subsets:
- [Native Multi Legal Pile](https://huggingface.co/datasets/joelito/Multi_Legal_Pile)
- [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources)
- [MC4 Legal](https://huggingface.co/datasets/joelito/mc4_legal)
- [Pile of Law](https://huggingface.co/datasets/pile-of-law/pile-of-law)
- [EU Wikipedias](https://huggingface.co/datasets/joelito/EU_Wikipedias)
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
This dataset has been created by combining the following datasets:
Native Multi Legal Pile, Eurlex Resources, MC4 Legal, Pile of Law, EU Wikipedias.
It has been filtered to remove short documents (less than 64 whitespace-separated tokens) and
documents with more than 30% punctuation or numbers (see prepare_legal_data.py for more details).
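The two document-level criteria can be sketched as a standalone predicate. The thresholds (64 tokens, 30%) come from the description above; the actual prepare_legal_data.py may differ in details:

```python
import string

def keep_document(text: str) -> bool:
    """Keep a document only if it has >= 64 whitespace-separated tokens
    and at most 30% punctuation-or-digit characters (whitespace excluded)."""
    tokens = text.split()
    if len(tokens) < 64:
        return False
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    noisy = sum(c in string.punctuation or c.isdigit() for c in chars)
    return noisy / len(chars) <= 0.30

print(keep_document("word " * 100))  # True: long enough, no punctuation
print(keep_document("1 2 3"))        # False: too short
```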
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| [
-0.4936216175556183,
-0.4580005705356598,
0.2020283341407776,
0.24689370393753052,
-0.3669467568397522,
0.008032126352190971,
-0.43684524297714233,
-0.25977107882499695,
0.46604660153388977,
0.7712966799736023,
-0.3625462055206299,
-0.8584863543510437,
-0.6007028818130493,
0.10540912300348... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/ml-qa | jamescalam | 2023-01-04T12:26:06Z | 46 | 0 | null | [
"region:us"
] | 2023-01-04T12:26:06Z | 2023-01-04T12:21:40.000Z | 2023-01-04T12:21:40 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/mnist_sorted_v0.0 | jlbaker361 | 2023-02-04T17:09:27Z | 46 | 0 | null | [
"region:us"
] | 2023-02-04T17:09:27Z | 2023-02-04T17:09:24.000Z | 2023-02-04T17:09:24 | ---
dataset_info:
features:
- name: label
dtype: int64
- name: sequence
sequence: int64
- name: occurence
dtype: int64
- name: split
dtype: string
splits:
- name: train
num_bytes: 84223889
num_examples: 68614
download_size: 12695868
dataset_size: 84223889
---
# Dataset Card for "mnist_sorted_v0.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6393502354621887,
0.03221666440367699,
0.23591648042201996,
0.22103960812091827,
-0.515250563621521,
0.00136813975404948,
0.43069902062416077,
-0.22249403595924377,
1.1198912858963013,
0.7079877853393555,
-0.8781882524490356,
-0.8678348660469055,
-0.7507739663124084,
-0.1619116514921188... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
djstrong/oscar-small | djstrong | 2023-03-07T19:57:38Z | 46 | 1 | oscar | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:oscar",
"language:af",
"language:am",
"language:ar",
"language:arz",
"language:as",
"language:az",
"language:azb"... | 2023-03-07T19:57:38Z | 2023-03-07T19:55:38.000Z | 2023-03-07T19:55:38 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- ar
- arz
- as
- az
- azb
- ba
- be
- bg
- bn
- bo
- br
- ca
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mhr
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nds
- ne
- nl
- nn
- 'no'
- or
- os
- pa
- pl
- pnb
- ps
- pt
- ro
- ru
- sa
- sah
- sd
- sh
- si
- sk
- sl
- sq
- sr
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- yi
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
source_datasets:
- oscar
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: OSCAR
---
## WARNING: this dataset is an extract of the OSCAR dataset published here to simulate the use of the full dataset in low-resource contexts.
Using this dataset is equivalent to using a processed version of OSCAR legally speaking. I take no credit for the gathering of the original data and hence refer entirely to the original dataset in the card below.
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language, both the original and the deduplicated versions of the data are available. 166 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.
## Dataset Structure
We show detailed information for all the configurations of the dataset.
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises their pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of simultaneous operations bounded by the number of available threads instead of the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus, in the goclassy pipeline, one does not have to wait for a whole WET file to be downloaded, decompressed and classified before starting to download and process the next one: a new file starts downloading and processing as soon as the scheduler is able to allocate a new process.
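The scheduling idea can be sketched in Python with a bounded thread pool; this is only an illustration, since goclassy itself is written in Go and relies on the Go runtime scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_wet_file(name):
    # Stand-in for downloading, decompressing and classifying one WET file.
    return f"classified:{name}"

# One worker per operation, with simultaneous operations capped by the
# thread budget rather than the number of CPUs.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_wet_file, [f"wet_{i}" for i in range(8)]))

print(results[0])  # classified:wet_0
```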
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded without being classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
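A minimal Python sketch of these cleaning and deduplication rules (the actual implementation lives in goclassy):

```python
def clean_and_dedup(raw_lines):
    """Drop invalid UTF-8 and lines shorter than 100 characters,
    keeping one copy of each surviving line (deduplicated version)."""
    seen, kept = set(), []
    for raw in raw_lines:
        try:
            text = raw.decode("utf-8")
        except UnicodeDecodeError:
            continue              # invalid UTF-8: discarded, never classified
        if len(text) < 100:
            continue              # shorter than 100 characters: discarded
        if text not in seen:      # deduplicated version keeps one copy
            seen.add(text)
            kept.append(text)
    return kept

long_line = ("ab " * 34).encode("utf-8")  # 102 characters
print(len(clean_and_dedup([b"too short", b"\xff\xfe", long_line, long_line])))  # 1
```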
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not properly filtered yet and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{\'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.

rcds/lower_court_insertion_swiss_judgment_prediction

---
annotations_creators:
- expert-generated
language:
- de
- fr
- it
- en
language_creators:
- expert-generated
- found
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: LowerCourtInsertionSwissJudgmentPrediction
size_categories:
- 1K<n<10K
source_datasets:
- extended|swiss_judgment_prediction
tags:
- explainability-judgment-prediction
task_categories:
- text-classification
- other
task_ids: []
---
# Dataset Card for "LowerCourtInsertionSwissJudgmentPrediction": An implementation of lower court insertion bias analysis for Swiss judgment prediction
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Summary](#dataset-summary)
- [Documents](#documents)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Summary
This dataset contains an implementation of lower-court-insertion for the SwissJudgmentPrediction task.
Note that this dataset only provides a test set and should be used in combination with the [Swiss-Judgment-Prediction](https://huggingface.co/datasets/swiss_judgment_prediction) dataset.
### Documents
Lower-Court-Insertion-Swiss-Judgment-Prediction is a subset of the [Swiss-Judgment-Prediction](https://huggingface.co/datasets/swiss_judgment_prediction) dataset.
The Swiss-Judgment-Prediction dataset is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), the publication year, the legal area and the canton of origin per case. Lower-Court-Insertion-Swiss-Judgment-Prediction extends this dataset by adding lower court insertion.
### Supported Tasks and Leaderboards
LowerCourtInsertionSwissJudgmentPrediction can be used for performing the LowerCourtInsertion in the legal judgment prediction task.
### Languages
Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings.
## Dataset structure
### Data Instances
#### Multilingual use of the dataset
When the dataset is used in a multilingual setting, select the 'all' flag:
```python
from datasets import load_dataset
dataset = load_dataset('rcds/lower_court_insertion_swiss_judgment_prediction', 'all')
```
#### Monolingual use of the dataset
When the dataset is used in a monolingual setting, select the ISO language code for one of the 3 supported languages. For example:
```python
from datasets import load_dataset
dataset = load_dataset('rcds/lower_court_insertion_swiss_judgment_prediction', 'de')
```
### Data Fields
The following data fields are provided for documents (test):
id: (**int**) a unique identifier for the document<br/>
year: (**int**) the publication year<br/>
label: (**str**) the judgment outcome: dismissal or approval<br/>
language: (**str**) one of (de, fr, it)<br/>
region: (**str**) the region of the lower court<br/>
canton: (**str**) the canton of the lower court<br/>
legal area: (**str**) the legal area of the case<br/>
explainability_label: (**str**) the explainability label assigned to the occluded text: (Lower court, Baseline)<br/>
text: (**str**) the facts of the case w/o the occluded text except for cases w/ explainability label "Baseline" (contain entire facts)<br/>
lower_court: (**str**) the inserted lower_court (for Baseline there is no insertion)<br/>
### Data Splits (Including Swiss Judgment Prediction)
Language | Subset | Number of Rows (Test)
|-----|-----|------|
German| de| __378__
French | fr| __414__
Italian | it| __335__
All | all | __1127__
Language | Subset | Number of Documents (Test)
| ----------- | ----------- | ----------- |
German| de | __38__
French | fr | __36__
Italian | it | __34__
All | all | __108__
## Dataset Creation
### Curation Rationale
The dataset was curated by Niklaus et al. (2021) and Nina Baumgartner.
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
Switzerland has four official languages with 3 languages (German, French and Italian) being represented in more than 1000 Swiss Federal Supreme court decisions. The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
The decisions have been annotated with the binarized judgment outcome using parsers and regular expressions. In addition, a subset of the test set (27 cases in German, 24 in French and 23 in Italian, spanning the years 2017 and 2020) was annotated by legal experts with the lower court. These lower court annotations were then used to insert each lower court into each case once (instead of the original lower court), allowing an analysis of the change in the model's performance for each inserted lower court and giving insight into possible bias among them. The legal expert annotations were conducted from April 2020 to August 2020.
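The insertion scheme can be sketched as follows; the field names match this card's data-field list, while the case text, court names and string-replacement logic are simplified stand-ins for the experts' work:

```python
def insert_lower_courts(cases, lower_courts):
    """For each case, emit one Baseline variant (original facts) plus one
    variant per annotated lower court, inserted in place of the original."""
    variants = []
    for case in cases:
        variants.append({**case, "explainability_label": "Baseline", "lower_court": ""})
        for court in lower_courts:
            text = case["text"].replace(case["original_lower_court"], court)
            variants.append({**case, "text": text,
                             "explainability_label": "Lower court",
                             "lower_court": court})
    return variants

courts = ["Obergericht ZH", "Kantonsgericht VD"]  # hypothetical court names
cases = [{"id": 7, "text": "Appeal against the decision of Obergericht ZH ...",
          "original_lower_court": "Obergericht ZH"}]
variants = insert_lower_courts(cases, courts)
print(len(variants))  # 3: one baseline + one per lower court
```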
#### Who are the annotators?
Joel Niklaus and Adrian Jörg annotated the binarized judgment outcomes. Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch). The group of legal experts consists of Thomas Lüthi (lawyer), Lynn Grau (law student at master's level) and Angela Stefanelli (law student at master's level).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Additional Information
### Dataset Curators
Niklaus et al. (2021) and Nina Baumgartner
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2000-2020
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
```
@misc{baumgartner_nina_occlusion_2019,
title = {From Occlusion to Transparency – An Occlusion-Based Explainability Approach for Legal Judgment Prediction in Switzerland},
shorttitle = {From Occlusion to Transparency},
abstract = {Natural Language Processing ({NLP}) models have been used for more and more complex tasks such as Legal Judgment Prediction ({LJP}). A {LJP} model predicts the outcome of a legal case by utilizing its facts. This increasing deployment of Artificial Intelligence ({AI}) in high-stakes domains such as law and the involvement of sensitive data has increased the need for understanding such systems. We propose a multilingual occlusion-based explainability approach for {LJP} in Switzerland and conduct a study on the bias using Lower Court Insertion ({LCI}). We evaluate our results using different explainability metrics introduced in this thesis and by comparing them to high-quality Legal Expert Annotations using Inter Annotator Agreement. Our findings show that the model has a varying understanding of the semantic meaning and context of the facts section, and struggles to distinguish between legally relevant and irrelevant sentences. We also found that the insertion of a different lower court can have an effect on the prediction, but observed no distinct effects based on legal areas, cantons, or regions. However, we did identify a language disparity with Italian performing worse than the other languages due to representation inequality in the training data, which could lead to potential biases in the prediction in multilingual regions of Switzerland. Our results highlight the challenges and limitations of using {NLP} in the judicial field and the importance of addressing concerns about fairness, transparency, and potential bias in the development and use of {NLP} systems. The use of explainable artificial intelligence ({XAI}) techniques, such as occlusion and {LCI}, can help provide insight into the decision-making processes of {NLP} systems and identify areas for improvement. Finally, we identify areas for future research and development in this field in order to address the remaining limitations and challenges.},
author = {{Baumgartner, Nina}},
year = {2022},
langid = {english}
}
```
### Contributions
Thanks to [@ninabaumgartner](https://github.com/ninabaumgartner) for adding this dataset.
Den4ikAI/russian_dialogues

---
license: mit
task_categories:
- conversational
language:
- ru
size_categories:
- 1M<n<10M
---
A dataset of Russian dialogues collected from Telegram chats.
The dialogues are annotated for relevance.
Negative examples were also generated by shuffling similar answers.
Number of dialogues: 2 million.
Dataset format:
```
{
'question': 'Привет',
'answer': 'Привет, как дела?',
'relevance': 1
}
```
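For example, the relevance labels can be used to keep only positive pairs; the rows below are hypothetical, whereas in practice they would come from `load_dataset("Den4ikAI/russian_dialogues")`:

```python
rows = [
    {"question": "Привет", "answer": "Привет, как дела?", "relevance": 1},
    {"question": "Привет", "answer": "Число Пи равно 3.14", "relevance": 0},
]
# Keep only relevant pairs, e.g. for training a response ranker.
relevant = [r for r in rows if r["relevance"] == 1]
print(len(relevant))  # 1
```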
Parsing tool: https://github.com/Den4ikAI/telegram_chat_parser
### Citation:
```
@MISC{russian_instructions,
author = {Denis Petrov},
title = {Russian dialogues dataset for conversational agents},
url = {https://huggingface.co/datasets/Den4ikAI/russian_dialogues},
year = 2023
}
```

RealTimeData/bbc_news_march_2023

---
license: cc-by-2.0
---
patomp/thai-mscoco-2014-captions

---
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences_tokens
list:
list: string
- name: sentences_raw
list: string
- name: sentences_sentid
list: int32
- name: cocoid
dtype: int32
- name: th_sentences_raw
sequence: string
splits:
- name: test
num_bytes: 819234726.0
num_examples: 5000
- name: validation
num_bytes: 807387321.0
num_examples: 5000
- name: train
num_bytes: 18882795327.165
num_examples: 113287
download_size: 20158273111
dataset_size: 20509417374.165
---
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("patomp/thai-mscoco-2014-captions")
dataset
```
output
```python
DatasetDict({
train: Dataset({
features: ['image', 'filepath', 'sentids', 'filename', 'imgid', 'split', 'sentences_tokens', 'sentences_raw', 'sentences_sentid', 'cocoid', 'th_sentences_raw'],
num_rows: 113287
})
validation: Dataset({
features: ['image', 'filepath', 'sentids', 'filename', 'imgid', 'split', 'sentences_tokens', 'sentences_raw', 'sentences_sentid', 'cocoid', 'th_sentences_raw'],
num_rows: 5000
})
test: Dataset({
features: ['image', 'filepath', 'sentids', 'filename', 'imgid', 'split', 'sentences_tokens', 'sentences_raw', 'sentences_sentid', 'cocoid', 'th_sentences_raw'],
num_rows: 5000
})
})
```
A sample
```python
dataset["validation"][0]
```
output
```python
{
"image":<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x336 at 0x7F6C5A83F430>,
"filepath":"COCO_val2014_000000184613.jpg",
"sentids":[474921,479322,479334,481560,483594],
"filename":"COCO_val2014_000000184613.jpg",
"imgid":2,
"split":"val",
"sentences_tokens":[
["a", "child","holding", "a","flowered","umbrella","and","petting","a","yak"],["a","young","man","holding","an","umbrella","next","to","a","herd","of","cattle"],
["a","young","boy","barefoot","holding","an","umbrella","touching","the","horn","of","a","cow"],
["a","young","boy","with","an","umbrella","who","is","touching","the","horn","of","a","cow"],
["a","boy","holding","an","umbrella","while","standing","next","to","livestock"]
],
"sentences_raw":[
"A child holding a flowered umbrella and petting a yak.",
"A young man holding an umbrella next to a herd of cattle.",
"a young boy barefoot holding an umbrella touching the horn of a cow",
"A young boy with an umbrella who is touching the horn of a cow.",
"A boy holding an umbrella while standing next to livestock."
],
"sentences_sentid":[474921,479322,479334,481560,483594],
"cocoid":184613,
"th_sentences_raw":[
"เด็กถือร่มที่มีดอกหนึ่งคันและลูบคลูบลํา",
"ชายหนุ่มคนหนึ่งถือร่มไว้ข้างๆ ฝูงวัว",
"เด็กหนุ่มคนหนึ่งเท้าเปล่าจับร่มจับแตรของวัว",
"เด็กชายที่มีร่มสัมผัสแตรของวัว",
"เด็กชายถือร่มในขณะที่ยืนถัดจากปศุสัตว์"
]
}
```
## Dataset Construction
The dataset was constructed by translating the captions of the [MS COCO 2014 dataset](https://huggingface.co/datasets/HuggingFaceM4/COCO) [1] to Thai using the [NMT model](https://airesearch.in.th/releases/machine-translation-models/) provided by the VISTEC-depa Thailand Artificial Intelligence Research Institute [2]. The translated dataset, with all 3 splits (train, validation and test), was published on the [Hugging Face Hub](https://huggingface.co/datasets/patomp/thai-mscoco-2014-captions).
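The construction step can be sketched as a simple map over the English captions; `translate` below is a placeholder, not the actual VISTEC-depa NMT model:

```python
def translate(sentence):
    # Placeholder for the English->Thai NMT model used in the real pipeline.
    return "th:" + sentence

def add_thai_captions(example):
    """Attach a Thai translation of every English caption, mirroring the
    `th_sentences_raw` field shown in the sample above."""
    example["th_sentences_raw"] = [translate(s) for s in example["sentences_raw"]]
    return example

sample = {"sentences_raw": ["A child holding a flowered umbrella and petting a yak."]}
print(add_thai_captions(sample)["th_sentences_raw"][0].startswith("th:"))  # True
```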
## References
[1] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common Objects in Context. In Computer Vision – ECCV 2014, Springer International Publishing, Cham, 740–755.
[2] English-Thai Machine Translation Models. (2020, June 23). VISTEC-depa Thailand Artificial Intelligence Research Institute. https://airesearch.in.th/releases/machine-translation-models/

doushabao4766/resume_zh_ner

---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 1893971
num_examples: 3821
- name: test
num_bytes: 231104
num_examples: 477
- name: validation
num_bytes: 212262
num_examples: 463
download_size: 0
dataset_size: 2337337
---
# Dataset Card for "resume_zh_ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

clarin-knext/quora-pl-qrels

---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl

RiTA-nlp/ITALIC

---
pretty_name: ITALIC
annotations_creators:
- crowdsourced
language_creators:
- Italian
language_bcp47:
- it
size_categories:
it: 10K<n<100K
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- intent-classification
license: cc-by-nc-nd-4.0
---
# Dataset Card for ITALIC: An ITALian Intent Classification Dataset
ITALIC is the first intent classification dataset for the Italian language.
It includes spoken and written utterances and is annotated with 60 intents.
The dataset is available on [Zenodo](https://zenodo.org/record/8040649) and connectors are available for the [HuggingFace Hub](https://huggingface.co/datasets/RiTA-nlp/ITALIC).
### Latest Updates
- **June 15th, 2023**: ITALIC dataset has been released on [Zenodo](https://zenodo.org/record/8040649): https://zenodo.org/record/8040649.
## Table of Contents
- [Data collection](#data-collection)
- [Dataset](#dataset)
- [Usage](#usage)
- [Models used in the paper](#models-used-in-the-paper)
- [SLU intent classification](#slu-intent-classification)
- [ASR](#asr)
- [NLU intent classification](#nlu-intent-classification)
- [Citation](#citation)
- [License](#license)
## Data collection
The data collection follows the MASSIVE NLU dataset, which provides annotated text for 60 intents. The data collection process is described in the paper [Massive Natural Language Understanding](https://arxiv.org/abs/2204.08582).
Following the MASSIVE NLU dataset, a pool of 70+ volunteers was recruited to annotate the dataset. The volunteers were asked to record their voice while reading the utterances (the original text is available in the MASSIVE dataset). Together with the audio, the volunteers were asked to provide a self-annotated description of the recording conditions (e.g., background noise, recording device). The audio recordings have also been validated and, in case of errors, re-recorded by the volunteers.
All the audio recordings included in the dataset have received validation from at least two volunteers. All the audio recordings have been validated by native Italian speakers (self-annotated).
## Dataset
The dataset is available on [Zenodo](https://zenodo.org/record/8040649). It is composed of 3 different splits:
- `massive`: all the utterances are randomly shuffled and divided into 3 splits (train, validation, test).
- `hard_speaker`: the utterances are divided into 3 splits (train, validation, test) based on the speaker. Each split only contains utterances from a pool of speakers that do not overlap with the other splits.
- `hard_noisy`: the utterances are divided into 3 splits (train, validation, test) based on the recording conditions. The test split only contains utterances with the highest level of noise.
Each split contains the following annotations:
- `utt`: the original text of the utterance.
- `audio`: the audio recording of the utterance.
- `intent`: the intent of the utterance.
- `speaker`: the speaker of the utterance. The speaker is identified by a unique identifier and has been anonymized.
- `age`: the age of the speaker.
- `is_native`: whether the speaker is a native italian speaker or not.
- `gender`: the gender of the speaker (self-annotated).
- `region`: the region of the speaker (self-annotated).
- `nationality`: the nationality of the speaker (self-annotated).
- `lisp`: any kind of lisp of the speaker (self-annotated). It can be empty in case of no lisp.
- `education`: the education level of the speaker (self-annotated).
- `environment`: the environment of the recording (self-annotated).
- `device`: the device used for the recording (self-annotated).
## Usage
The dataset can be loaded using the `datasets` library. You need to install the following dependencies:
```bash
pip install datasets
pip install librosa
pip install soundfile
```
Then, you can load the dataset as follows:
```python
from datasets import load_dataset
# Please be sure to use use_auth_token=True and to set the access token
# using huggingface-cli login
# or follow https://huggingface.co/docs/hub/security-tokens
# configs "hard_speaker" and "hard_noisy" are also available (to substitute "massive")
italic = load_dataset("RiTA-nlp/ITALIC", "massive", use_auth_token=True)
italic_train = italic["train"]
italic_valid = italic["validation"]
italic_test = italic["test"]
```
The dataset has been designed for intent classification, but it can be used for other tasks as well:
- **Intent classification**: the `intent` column can be used as the label.
- **Speaker identification**: the `speaker` column can be used as the label.
- **Automatic speech recognition**: the `utt` column can be used as the label.
- **Accent identification**: the `region` column can be used as the label.
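For the intent-classification use case, the string labels in the `intent` column must first be mapped to contiguous integer ids before training a classifier. A minimal sketch (the intent names below are illustrative placeholders, not the actual ITALIC label set, which requires loading the gated dataset):

```python
def build_label_maps(intents):
    """Map each unique intent string to a contiguous integer id."""
    labels = sorted(set(intents))
    label2id = {label: i for i, label in enumerate(labels)}
    id2label = {i: label for label, i in label2id.items()}
    return label2id, id2label

# Placeholder intents; in practice use italic_train["intent"].
example_intents = ["alarm_set", "weather_query", "alarm_set", "play_music"]
label2id, id2label = build_label_maps(example_intents)
ids = [label2id[intent] for intent in example_intents]
```

The same `label2id` must be reused for the validation and test splits so that ids stay consistent across splits.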
For more information about the dataset, please refer to the [paper](https://arxiv.org/abs/2306.08502).
## Models used in the paper
### Hardware settings
All experiments were conducted on a private workstation with an Intel Core i9-10980XE CPU, one NVIDIA RTX A6000 GPU, and 64 GB of RAM, running Ubuntu 22.04 LTS.
### Parameter settings
The parameters used for the training of the models are set to allow a fair comparison between the different models and to follow the recommendations of the related literature. The parameters are summarized in the following table:
| Model | Task | Parameters | Learning rate | Batch size | Max epochs | Warmup | Weight decay | Avg. training time | Avg. inference time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| facebook/wav2vec2-xls-r-300m | SLU | 300M | 1e-4 | 128 | 30 | 0.1 ratio | 0.01 | 9m 35s per epoch | 13ms per sample |
| facebook/wav2vec2-xls-r-1b | SLU | 1B | 1e-4 | 32 | 30 | 0.1 ratio | 0.01 | 21m 30s per epoch | 29ms per sample |
| jonatasgrosman/wav2vec2-large-xlsr-53-italian | SLU | 300M | 1e-4 | 128 | 30 | 0.1 ratio | 0.01 | 9m 35s per epoch | 13ms per sample |
| jonatasgrosman/wav2vec2-xls-r-1b-italian | SLU | 1B | 1e-4 | 32 | 30 | 0.1 ratio | 0.01 | 21m 30s per epoch | 29ms per sample |
| ALM/whisper-it-small-augmented | ASR | 224M | 1e-5 | 8 | 5 | 500 steps | 0.01 | 26m 30s per epoch | 25ms per sample |
| EdoAbati/whisper-medium-it-2 | ASR | 769M | 1e-5 | 8 | 5 | 500 steps | 0.01 | 49m per epoch | 94ms per sample |
| EdoAbati/whisper-large-v2-it | ASR | 1.5B | 1e-5 | 8 | 5 | 500 steps | 0.01 | 1h 17m per epoch | 238ms per sample |
| bert-base-multilingual-uncased | NLU | 167M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 1m 22s per epoch | 1.5ms per sample |
| facebook/mbart-large-cc25 | NLU | 611M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 7m 53s per epoch | 4.7ms per sample |
| dbmdz/bert-base-italian-xxl-uncased | NLU | 110M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 1m 30s per epoch | 1.4ms per sample |
| morenolq/bart-it | NLU | 141M | 5e-5 | 8 | 5 | 500 steps | 0.01 | 1m 54s per epoch | 1.9 ms per sample |
In all cases, we opted for the AdamW optimizer.
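The warmup columns in the table above describe a linear warmup followed by linear decay. A sketch of that schedule in plain Python, written for clarity under the assumption that it matches the usual `transformers` `get_linear_schedule_with_warmup` behavior:

```python
def lr_at_step(step, total_steps, base_lr, warmup_ratio=0.1):
    """Linear warmup for warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp up from 0 to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Decay linearly from base_lr to 0 over the remaining steps.
    remaining = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * remaining

# Example with 1000 total steps and a base LR of 1e-4, as in the SLU rows.
mid_warmup = lr_at_step(50, 1000, 1e-4)
peak = lr_at_step(100, 1000, 1e-4)
final = lr_at_step(1000, 1000, 1e-4)
```

With a 0.1 warmup ratio, the learning rate peaks at step 100 of 1000 and reaches zero at the last step.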
### SLU intent classification
The models used in the paper are available on the [Hugging Face Hub](https://huggingface.co/models).
- 🌍 [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
- 🌍 [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b)
- 🇮🇹 [jonatasgrosman/wav2vec2-xls-r-1b-italian](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-italian)
- 🇮🇹 [jonatasgrosman/wav2vec2-large-xlsr-53-italian](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-italian)
### ASR
The models used in the paper are available on the [Hugging Face Hub](https://huggingface.co/models).
- 🌍 Whisper large (zero-shot ASR): [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2)
- 🇮🇹 Whisper small: [ALM/whisper-it-small-augmented](https://huggingface.co/ALM/whisper-it-small-augmented)
- 🇮🇹 Whisper medium: [EdoAbati/whisper-medium-it-2](https://huggingface.co/EdoAbati/whisper-medium-it-2)
- 🇮🇹 Whisper large: [EdoAbati/whisper-large-v2-it](https://huggingface.co/EdoAbati/whisper-large-v2-it)
### NLU intent classification
The models used in the paper are available on the [Hugging Face Hub](https://huggingface.co/models).
- 🌍 [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)
- 🌍 [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- 🇮🇹 [dbmdz/bert-base-italian-xxl-uncased](https://huggingface.co/dbmdz/bert-base-italian-xxl-uncased)
- 🇮🇹 [morenolq/bart-it](https://huggingface.co/morenolq/bart-it)
## Citation
If you use this dataset in your research, please cite the following paper (the **Interspeech 2023** version is coming soon, after the proceedings are published):
```bibtex
@article{koudounas2023italic,
title={ITALIC: An Italian Intent Classification Dataset},
author={Koudounas, Alkis and La Quatra, Moreno and Vaiani, Lorenzo and Colomba, Luca and Attanasio, Giuseppe and Pastor, Eliana and Cagliero, Luca and Baralis, Elena},
journal={arXiv preprint arXiv:2306.08502},
year={2023}
}
```
## License
The dataset is licensed under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
- [Paper describing the dataset and initial experiments](https://arxiv.org/abs/2306.08502)
- [Dataset on Zenodo](https://zenodo.org/record/8040649)
- [License text (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
| [
-0.4479552209377289,
-0.5811574459075928,
0.29586678743362427,
0.31841036677360535,
-0.16747309267520905,
-0.3580159842967987,
-0.44794031977653503,
-0.5947325825691223,
0.43696802854537964,
0.30752435326576233,
-0.6973742246627808,
-0.8077651858329773,
-0.4595591723918915,
0.0988279879093... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pankajmathur/alpaca_orca | pankajmathur | 2023-06-26T14:39:11Z | 46 | 18 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-06-26T14:39:11Z | 2023-06-24T18:20:35.000Z | 2023-06-24T18:20:35 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
An explain-tuned Alpaca dataset (~52K examples), created using approaches from the Orca research paper.
We leverage all 15 system instructions provided in the Orca research paper to generate custom datasets, in contrast to the vanilla instruction-tuning approaches used by the original datasets.
This helps student models like [orca_mini_13b](https://huggingface.co/psmathur/orca_mini_13b) learn the thought process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).
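A minimal sketch of assembling such an example into a single training prompt. The template and the system message below are assumptions for illustration; they are not the exact format or wording used in the dataset:

```python
def build_prompt(system, instruction, response):
    """Concatenate a system message, instruction, and response into one training example."""
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{instruction}\n\n"
        f"### Response:\n{response}"
    )

example = build_prompt(
    # Paraphrase of an Orca-style explain-tuning system message (hypothetical wording).
    "You are an AI assistant. Explain your reasoning step by step.",
    "Give three tips for staying healthy.",
    "1. Eat a balanced diet...",
)
```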
Please see how the **System** prompt is added before each **instruction**. | [
-0.6994594931602478,
-0.8971205949783325,
0.1367453783750534,
-0.27048420906066895,
-0.3398424983024597,
-0.28293561935424805,
0.13381807506084442,
-0.5111128091812134,
0.3342824876308441,
0.72453373670578,
-1.0047094821929932,
-0.22863109409809113,
-0.11513824760913849,
-0.176358804106712... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jjzha/skillspan | jjzha | 2023-09-07T12:12:10Z | 46 | 0 | null | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-09-07T12:12:10Z | 2023-07-04T13:37:04.000Z | 2023-07-04T13:37:04 | ---
license: cc-by-4.0
language: en
---
This is the SkillSpan dataset, introduced in:
```
@inproceedings{zhang-etal-2022-skillspan,
title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings",
author = "Zhang, Mike and
Jensen, Kristian and
Sonniks, Sif and
Plank, Barbara",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.366",
doi = "10.18653/v1/2022.naacl-main.366",
pages = "4962--4984"
}
```
There are document delimiters indicated by `idx`.
Number of samples (sentences):
- train: 4800
- dev: 3174
- test: 3569
Sources:
- Stackoverflow (tech)
- STAR (house)
Type of tags:
- Generic BIO tags with keys `tags_skill` and `tags_knowledge`
Sample:
```
{
"idx": 53,
"tokens": ["Drive", "our", "IT", "compliance", "agenda", "and", "develop", "our", "processes"],
"tags_skill": ["B", "I", "I", "I", "I", "O", "B", "I", "I"],
"tags_knowledge": ["O", "O", "O", "O", "O", "O", "O", "O", "O"],
"source": "house"
}
``` | [
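A minimal sketch of decoding these BIO tag sequences back into token-level spans (e.g., to recover the skill phrases from `tags_skill` in the sample above):

```python
def bio_to_spans(tokens, tags):
    """Return (start, end_exclusive, text) spans for each B/I run in a BIO sequence."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:  # close a span that runs directly into a new one
                spans.append((start, i, " ".join(tokens[start:i])))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i, " ".join(tokens[start:i])))
            start = None
    if start is not None:  # span reaching the end of the sentence
        spans.append((start, len(tags), " ".join(tokens[start:])))
    return spans

tokens = ["Drive", "our", "IT", "compliance", "agenda", "and", "develop", "our", "processes"]
tags = ["B", "I", "I", "I", "I", "O", "B", "I", "I"]
skill_spans = bio_to_spans(tokens, tags)
```

Applied to the sample, this recovers the two skill spans "Drive our IT compliance agenda" and "develop our processes".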
-0.22405868768692017,
-0.3385237753391266,
0.21507982909679413,
0.0508476085960865,
0.1025676354765892,
0.1695275455713272,
-0.3043034076690674,
-0.16935603320598602,
0.28423842787742615,
0.5659036040306091,
-0.5655478239059448,
-0.9119511842727661,
-0.7532700300216675,
0.39415085315704346... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PL-MTEB/ppc-pairclassification | PL-MTEB | 2023-08-11T11:00:22Z | 46 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-08-11T11:00:22Z | 2023-08-11T11:00:07.000Z | 2023-08-11T11:00:07 | ---
license: cc-by-nc-sa-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlabonne/medical-mqca-fr | mlabonne | 2023-09-09T16:18:56Z | 46 | 0 | null | [
"region:us"
] | 2023-09-09T16:18:56Z | 2023-09-09T12:54:31.000Z | 2023-09-09T12:54:31 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: Specialite
dtype: string
- name: Serie
dtype: int64
- name: Question
dtype: int64
- name: N_Question
dtype: int64
- name: Answer
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4455800
num_examples: 3836
- name: eval
num_bytes: 172116
num_examples: 150
download_size: 2123478
dataset_size: 4627916
---
# Dataset Card for "medical-mqca-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3956739902496338,
-0.1393909603357315,
0.3894960880279541,
0.050619419664144516,
-0.31108999252319336,
0.16326405107975006,
0.5564758777618408,
-0.16703087091445923,
1.0487465858459473,
0.38111013174057007,
-0.9814440011978149,
-0.839759111404419,
-0.6024789810180664,
-0.142711400985717... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Fraol/LLM-Data5 | Fraol | 2023-09-18T03:13:03Z | 46 | 0 | null | [
"region:us"
] | 2023-09-18T03:13:03Z | 2023-09-18T02:45:05.000Z | 2023-09-18T02:45:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 462063994
num_examples: 388405
- name: validation
num_bytes: 57196523
num_examples: 48550
- name: test
num_bytes: 57443243
num_examples: 48552
download_size: 352680335
dataset_size: 576703760
---
# Dataset Card for "LLM-Data5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6446465253829956,
0.00987611897289753,
0.45916053652763367,
0.21902941167354584,
-0.3010306656360626,
0.0686807706952095,
0.483642041683197,
-0.24518528580665588,
0.718116819858551,
0.61018967628479,
-1.031529188156128,
-1.0866814851760864,
-0.612437903881073,
0.0350843220949173,
-0.3... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pablo-moreira/gpt4all-j-prompt-generations-pt | pablo-moreira | 2023-10-06T16:02:12Z | 46 | 1 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-10-06T16:02:12Z | 2023-09-28T01:43:05.000Z | 2023-09-28T01:43:05 | ---
language:
- pt
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: GPT4All Prompt Generations translated into Portuguese using Google Translate.
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: source
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1956916380
num_examples: 808812
download_size: 1134108118
dataset_size: 1956916380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gpt4all-j-prompt-generations-pt"
## Dataset Description
A copy of the [gpt4all_prompt_generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations) dataset translated into Portuguese using the googletrans library.
## Translate
[translate_dataset.ipynb](translate_dataset.ipynb)
## Usage
[dataset_usage.ipynb](dataset_usage.ipynb) | [
-0.3157222867012024,
-0.28309863805770874,
0.15447555482387543,
0.5923649668693542,
-0.5890965461730957,
0.20715346932411194,
0.34036675095558167,
-0.04303232952952385,
0.5418364405632019,
0.5870242714881897,
-0.9064383506774902,
-0.7394319176673889,
-0.43141865730285645,
0.515034556388855... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
raicrits/fever_folds | raicrits | 2023-10-24T13:50:58Z | 46 | 0 | null | [
"region:us"
] | 2023-10-24T13:50:58Z | 2023-10-12T17:25:44.000Z | 2023-10-12T17:25:44 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BrunoGR/emotional_response_spanish_dataset | BrunoGR | 2023-11-21T06:47:22Z | 46 | 0 | null | [
"region:us"
] | 2023-11-21T06:47:22Z | 2023-10-20T00:30:25.000Z | 2023-10-20T00:30:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: index
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: Prompt_sp
dtype: string
- name: Prompt_mix
dtype: string
- name: Prompt_en
dtype: string
splits:
- name: train
num_bytes: 139130014
num_examples: 41910
- name: test
num_bytes: 5047940
num_examples: 1320
- name: validation
num_bytes: 8297080
num_examples: 2220
download_size: 43129906
dataset_size: 152475034
---
# Dataset Card for "emotional_response_spanish_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7060391902923584,
-0.3427274823188782,
0.07256773114204407,
0.7464427351951599,
-0.15663418173789978,
0.08984926342964172,
0.10188718885183334,
-0.31645524501800537,
1.048789620399475,
0.2692979574203491,
-1.059181809425354,
-0.6946717500686646,
-0.6777561902999878,
-0.00028446270152926... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Kabatubare/medical | Kabatubare | 2023-10-28T03:57:40Z | 46 | 1 | null | [
"language:en",
"license:other",
"healthcare",
"qna",
"nlp",
"english",
"region:us"
] | 2023-10-28T03:57:40Z | 2023-10-23T18:59:09.000Z | 2023-10-23T18:59:09 | ---
tags:
- healthcare
- qna
- nlp
- english
license: other
language:
- en
pretty_name: Medical QnA Datasets
---
# Dataset Card for "Medical" Healthcare QnA Datasets
## Dataset Details
### Dataset Description
The "Medical" dataset is a specialized subset curated from the larger MedDialog collection, featuring healthcare dialogues between doctors and patients. This dataset focuses on conversations from Icliniq, HealthcareMagic, and HealthTap. Written primarily in English, it is designed to serve a broad range of applications such as NLP research, healthcare chatbot development, and medical information retrieval. The dataset contains 24,000 rows.
- **Data Sources**: Curated from MedDialog, focusing on Icliniq, HealthcareMagic, and HealthTap
- **Size**: 24,000 rows
- **Language**: English
### Direct Uses:
- NLP research in healthcare dialogues
- Development of healthcare question-answering systems
- Medical information retrieval
### Limitations and Recommendations:
- Not a substitute for certified medical advice
- Exercise caution in critical healthcare applications
| [
-0.16072291135787964,
-0.34605392813682556,
0.26665133237838745,
-0.25626540184020996,
-0.3524829149246216,
0.3190787732601166,
0.18628689646720886,
-0.2975316345691681,
0.6149349212646484,
0.7046786546707153,
-0.864302396774292,
-0.8704008460044861,
-0.22980938851833344,
0.133215785026550... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
carlot/AIShell | carlot | 2023-10-25T07:11:42Z | 46 | 2 | null | [
"region:us"
] | 2023-10-25T07:11:42Z | 2023-10-25T06:51:11.000Z | 2023-10-25T06:51:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 17298206024.556
num_examples: 120098
- name: validation
num_bytes: 2355985522.02
num_examples: 14326
- name: test
num_bytes: 1041830607.408
num_examples: 7176
download_size: 20301958805
dataset_size: 20696022153.984
---
# Dataset Card for "Aishell1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6008387207984924,
-0.11364185810089111,
0.015959281474351883,
0.18922242522239685,
-0.18296262621879578,
0.020479867234826088,
0.5073896050453186,
-0.17450514435768127,
0.9306460022926331,
0.5289806723594666,
-0.8730328679084778,
-0.9846289157867432,
-0.6935087442398071,
-0.228924617171... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Jelles2/SocialDealSet | Jelles2 | 2023-11-02T14:29:54Z | 46 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-11-02T14:29:54Z | 2023-10-30T13:11:47.000Z | 2023-10-30T13:11:47 | ---
license: cc-by-4.0
task_categories:
- text-generation
pretty_name: SocialDealSet
size_categories:
- n<1K
language:
- en
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
coastalcph/fm_classifier-1-n | coastalcph | 2023-11-04T10:39:14Z | 46 | 0 | null | [
"region:us"
] | 2023-11-04T10:39:14Z | 2023-11-01T16:47:12.000Z | 2023-11-01T16:47:12 | ---
dataset_info:
features:
- name: query
dtype: string
- name: answer
list:
- name: wikidata_id
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: relation
dtype: string
- name: date
dtype: int64
- name: type
dtype: string
- name: is_mutable
dtype: int64
splits:
- name: train
num_bytes: 1199458.9463519314
num_examples: 6824
- name: validation
num_bytes: 1017432.6521589737
num_examples: 5911
- name: test
num_bytes: 838131.8596491228
num_examples: 4256
download_size: 1322431
dataset_size: 3055023.458160028
---
# Dataset Card for "fm_classifier-1-n"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7292582392692566,
-0.1984916627407074,
0.14514124393463135,
0.2605239152908325,
-0.2833280563354492,
-0.20751690864562988,
0.3303474187850952,
-0.12296437472105026,
0.7800358533859253,
0.1533076912164688,
-0.981450617313385,
-0.7193575501441956,
-0.7452252507209778,
-0.07191302627325058... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wrahmed/magento | wrahmed | 2023-11-03T21:14:44Z | 46 | 1 | null | [
"region:us"
] | 2023-11-03T21:14:44Z | 2023-11-03T21:13:34.000Z | 2023-11-03T21:13:34 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexemanuel27/org-acad-train-test | alexemanuel27 | 2023-11-04T18:21:26Z | 46 | 0 | null | [
"region:us"
] | 2023-11-04T18:21:26Z | 2023-11-04T18:11:21.000Z | 2023-11-04T18:11:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: title
dtype: string
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 435339
num_examples: 69
- name: validation
num_bytes: 193409
num_examples: 31
download_size: 51330
dataset_size: 628748
---
# Dataset Card for "org-acad-train-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.645195722579956,
-0.20359109342098236,
-0.021020984277129173,
0.10332852602005005,
-0.047307323664426804,
0.06506487727165222,
0.29465681314468384,
-0.1017284095287323,
0.5843371748924255,
0.1430564522743225,
-0.7132779955863953,
-0.48865336179733276,
-0.4488350450992584,
-0.22268076241... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arthurmluz/temario_data-wiki_gptextsum_results | arthurmluz | 2023-11-08T17:55:23Z | 46 | 0 | null | [
"region:us"
] | 2023-11-08T17:55:23Z | 2023-11-08T17:55:00.000Z | 2023-11-08T17:55:00 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
splits:
- name: validation
num_bytes: 208005
num_examples: 25
download_size: 164069
dataset_size: 208005
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "temario_data-wiki_gptextsum_results"
rouge= {'rouge1': 0.21036975294101332, 'rouge2': 0.07970392536191843, 'rougeL': 0.1477604081207584, 'rougeLsum': 0.1477604081207584}
bert= {'precision': 0.7488837575912476, 'recall': 0.6433243179321289, 'f1': 0.6917135095596314} | [
-0.2602599561214447,
-0.4603317379951477,
0.32232338190078735,
-0.06908289343118668,
-0.529598593711853,
-0.1266469955444336,
-0.513016939163208,
-0.0027193455025553703,
0.8387311100959778,
0.22639603912830353,
-0.3536931872367859,
-1.1119647026062012,
-0.8915797472000122,
0.03264621645212... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
princeton-nlp/SWE-bench_oracle | princeton-nlp | 2023-11-16T22:05:22Z | 46 | 1 | null | [
"arxiv:2310.06770",
"region:us"
] | 2023-11-16T22:05:22Z | 2023-11-09T01:36:53.000Z | 2023-11-09T01:36:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: instance_id
dtype: string
- name: text
dtype: string
- name: repo
dtype: string
- name: base_commit
dtype: string
- name: problem_statement
dtype: string
- name: hints_text
dtype: string
- name: created_at
dtype: string
- name: patch
dtype: string
- name: test_patch
dtype: string
- name: version
dtype: string
- name: FAIL_TO_PASS
dtype: string
- name: PASS_TO_PASS
dtype: string
- name: environment_setup_commit
dtype: string
splits:
- name: train
num_bytes: 2927236667
num_examples: 18817
- name: dev
num_bytes: 26551408
num_examples: 225
- name: test
num_bytes: 246539542
num_examples: 2294
- name: validation
num_bytes: 25752989
num_examples: 191
download_size: 1279869834
dataset_size: 3226080606
---
# Dataset Card for "SWE-bench_oracle"
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770)
This dataset `SWE-bench_oracle` includes a formatting of each instance using the "Oracle" retrieval setting as described in the paper. The `text` column can be used directly with LMs to generate patch files.
Models are instructed to generate [`patch`](https://en.wikipedia.org/wiki/Patch_(Unix)) formatted file using the following template:
```diff
<patch>
diff
--- a/path/to/file.py
+++ b/path/to/file.py
@@ -1,3 +1,3 @@
This is a test file.
-It contains several lines.
+It has been modified.
This is the third line.
</patch>
```
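Since models are asked to wrap their diff in `<patch>` tags, a minimal post-processing sketch for pulling the diff out of a raw generation (an assumption for illustration; the official inference scripts handle this step):

```python
import re

def extract_patch(model_output):
    """Return the diff between the first <patch>...</patch> pair, or None."""
    match = re.search(r"<patch>\n?(.*?)\n?</patch>", model_output, re.DOTALL)
    return match.group(1) if match else None

output = (
    "Here is my fix:\n<patch>\ndiff\n--- a/f.py\n+++ b/f.py\n"
    "@@ -1 +1 @@\n-old\n+new\n</patch>"
)
patch = extract_patch(output)
```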
This format can be used directly with the [SWE-bench inference scripts](https://github.com/princeton-nlp/SWE-bench/tree/main/inference). Please refer to these scripts for more details on inference.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution, provided a full repository and a GitHub issue. The leaderboard can be found at [www.swebench.com](https://www.swebench.com).
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
text: (str) - The input text including instructions, the "Oracle" retrieved file, and an example of the patch format for output.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
```
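Note that `FAIL_TO_PASS` and `PASS_TO_PASS` are JSON-encoded strings rather than native lists, so they need to be decoded before use. A minimal sketch, using dummy test names rather than real dataset content:

```python
import json

def parse_test_lists(instance):
    """Decode the JSON-encoded test lists of a SWE-bench instance."""
    return {
        "fail_to_pass": json.loads(instance["FAIL_TO_PASS"]),
        "pass_to_pass": json.loads(instance["PASS_TO_PASS"]),
    }

# Illustrative instance with placeholder test identifiers.
instance = {
    "FAIL_TO_PASS": '["tests/test_a.py::test_fixed"]',
    "PASS_TO_PASS": '["tests/test_b.py::test_unrelated"]',
}
tests = parse_test_lists(instance)
```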
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4996466636657715,
-0.5753958821296692,
0.2655700743198395,
0.4018482565879822,
-0.08488734066486359,
-0.27874964475631714,
-0.308683842420578,
-0.32009202241897583,
0.34530141949653625,
0.5777051448822021,
-0.7503330707550049,
-0.6257489919662476,
-0.1930227279663086,
0.0730594992637634... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lawinsider/uk_ner_contracts | lawinsider | 2023-11-15T14:17:31Z | 46 | 0 | null | [
"task_categories:token-classification",
"language:uk",
"legal",
"region:us"
] | 2023-11-15T14:17:31Z | 2023-11-15T12:17:27.000Z | 2023-11-15T12:17:27 | ---
task_categories:
- token-classification
language:
- uk
tags:
- legal
pretty_name: uk NER contracts
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PAug/lisa-formal-v1 | PAug | 2023-11-15T14:50:53Z | 46 | 0 | null | [
"region:us"
] | 2023-11-15T14:50:53Z | 2023-11-15T14:49:03.000Z | 2023-11-15T14:49:03 | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1321105
num_examples: 104
download_size: 400787
dataset_size: 1321105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lisa-formal-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5720789432525635,
-0.54752516746521,
0.165644571185112,
0.15586872398853302,
-0.23614419996738434,
-0.24843595921993256,
0.3611912131309509,
-0.29990559816360474,
1.0637352466583252,
0.6624968647956848,
-1.0877729654312134,
-0.9562375545501709,
-0.6467752456665039,
-0.1526670604944229,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
felipeoes/filtered_long_answers_qa_blue_amazon_legislation_56k | felipeoes | 2023-11-19T21:00:43Z | 46 | 0 | null | [
"region:us"
] | 2023-11-19T21:00:43Z | 2023-11-19T04:44:26.000Z | 2023-11-19T04:44:26 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: file_name
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: new_questions
dtype: string
- name: new_long_answers
dtype: string
- name: url
sequence: 'null'
- name: __index_level_0__
dtype: float64
splits:
- name: train
num_bytes: 65873196
num_examples: 44796
- name: test
num_bytes: 8290067
num_examples: 5599
- name: validation
num_bytes: 8166999
num_examples: 5598
download_size: 43267608
dataset_size: 82330262
---
# Dataset Card for "filtered_long_answers_qa_blue_amazon_legislation_56k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5271763205528259,
-0.3128952085971832,
0.3375932276248932,
0.2916582226753235,
-0.6392067074775696,
-0.09575061500072479,
0.32790932059288025,
-0.23586757481098175,
0.6183837652206421,
0.9868373274803162,
-0.8320596218109131,
-0.8035289645195007,
-0.24830365180969238,
-0.112084671854972... | null | null | null | null | null | null | null | null | null | null | null | null | null |