id (string, 2–115) | author (string, 2–42, nullable) | last_modified (timestamp[us, UTC]) | downloads (int64, 0–8.87M) | likes (int64, 0–3.84k) | paperswithcode_id (string, 2–45, nullable) | tags (list) | lastModified (timestamp[us, UTC]) | createdAt (string, 24) | key (1 class) | created (timestamp[us]) | card (string, 1–1.01M) | embedding (list) | library_name (21 classes) | pipeline_tag (27 classes) | mask_token (null) | card_data (null) | widget_data (null) | model_index (null) | config (null) | transformers_info (null) | spaces (null) | safetensors (null) | transformersInfo (null) | modelId (string, 5–111, nullable) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
open-phi/rag-textbook-instruct-full | open-phi | 2023-10-11T04:57:32Z | 90 | 5 | null | [
"region:us"
] | 2023-10-11T04:57:32Z | 2023-10-10T18:53:45.000Z | 2023-10-10T18:53:45 | ---
dataset_info:
features:
- name: formatted_prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 117082216
num_examples: 8340
download_size: 44011549
dataset_size: 117082216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rag-textbook-instruct-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CJWeiss/multitiny | CJWeiss | 2023-10-26T21:32:27Z | 90 | 0 | null | [
"region:us"
] | 2023-10-26T21:32:27Z | 2023-10-26T21:32:01.000Z | 2023-10-26T21:32:01 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sources
sequence: string
- name: summary/long
dtype: string
- name: summary/short
dtype: string
- name: summary/tiny
dtype: string
splits:
- name: train
num_bytes: 489812218.2614571
num_examples: 1207
- name: test
num_bytes: 97877726.43171807
num_examples: 251
- name: valid
num_bytes: 63699346.36563877
num_examples: 145
download_size: 465403499
dataset_size: 651389291.0588139
---
# Dataset Card for "multitiny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
euisuh15/train_on_style | euisuh15 | 2023-11-21T11:17:34Z | 90 | 0 | null | [
"region:us"
] | 2023-11-21T11:17:34Z | 2023-11-14T09:56:12.000Z | 2023-11-14T09:56:12 | Entry not found |
maximedb/mfaq_light | maximedb | 2021-12-29T14:46:14Z | 89 | 0 | null | [
"region:us"
] | 2021-12-29T14:46:14Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found |
HuggingFaceM4/vatex | HuggingFaceM4 | 2022-05-13T21:27:03Z | 89 | 3 | null | [
"region:us"
] | 2022-05-13T21:27:03Z | 2022-05-13T20:11:59.000Z | 2022-05-13T20:11:59 | Entry not found |
RCC-MSU/collection3 | RCC-MSU | 2023-01-31T09:47:58Z | 89 | 4 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ru",
"license:other",
"region:us"
] | 2023-01-31T09:47:58Z | 2022-08-23T14:03:02.000Z | 2022-08-23T14:03:02 | ---
annotations_creators:
- other
language:
- ru
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: Collection3
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
splits:
- name: test
num_bytes: 935298
num_examples: 1922
- name: train
num_bytes: 4380588
num_examples: 9301
- name: validation
num_bytes: 1020711
num_examples: 2153
download_size: 878777
dataset_size: 6336597
---
# Dataset Card for Collection3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Collection3 homepage](http://labinform.ru/pub/named_entities/index.htm)
- **Repository:** [Needs More Information]
- **Paper:** [Two-stage approach in Russian named entity recognition](https://ieeexplore.ieee.org/document/7584769)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Collection3 is a Russian dataset for named entity recognition annotated with LOC (location), PER (person), and ORG (organization) tags. The dataset is based on the [Persons-1000](http://ai-center.botik.ru/Airec/index.php/ru/collections/28-persons-1000) collection, which originally contained 1000 news documents labeled only with person names.
Additional labels were obtained using guidelines similar to MUC-7, with the web-based collaborative annotation tool [Brat](http://brat.nlplab.org/).
The dataset currently contains 26K annotated named entities (11K persons, 7K locations, and 8K organizations).
Conversion to the IOB2 format and splitting into train, validation, and test sets was done by the [DeepPavlov team](http://files.deeppavlov.ai/deeppavlov_data/collection3_v2.tar.gz).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Russian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"id": "851",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 1, 2, 0, 0, 0],
"tokens": ['Главный', 'архитектор', 'программного', 'обеспечения', '(', 'ПО', ')', 'американского', 'высокотехнологичного', 'гиганта', 'Microsoft', 'Рэй', 'Оззи', 'покидает', 'компанию', '.']
}
```
### Data Fields
- id: a string feature.
- tokens: a list of string features.
- ner_tags: a list of classification labels (int). Full tagset with indices:
```
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
```
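As a minimal sketch, the integer tags can be decoded back into IOB2 strings with the mapping above (the helper name is illustrative, not part of the dataset):

```python
# Id-to-label mapping taken from the tagset shown above.
ID2LABEL = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-ORG", 4: "I-ORG", 5: "B-LOC", 6: "I-LOC"}

def decode_tags(ner_tags):
    """Map Collection3 integer tag ids back to IOB2 label strings."""
    return [ID2LABEL[i] for i in ner_tags]

# Tail of the train example above: Microsoft / Рэй / Оззи / покидает
decode_tags([3, 1, 2, 0])  # → ['B-ORG', 'B-PER', 'I-PER', 'O']
```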
### Data Splits
|name|train|validation|test|
|---------|----:|---------:|---:|
|Collection3|9301|2153|1922|
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{mozharova-loukachevitch-2016-two-stage-russian-ner,
author={Mozharova, Valerie and Loukachevitch, Natalia},
booktitle={2016 International FRUCT Conference on Intelligence, Social Media and Web (ISMW FRUCT)},
title={Two-stage approach in Russian named entity recognition},
year={2016},
pages={1-6},
doi={10.1109/FRUCT.2016.7584769}}
``` |
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450108 | autoevaluate | 2022-08-24T20:37:49Z | 89 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-24T20:37:49Z | 2022-08-24T20:35:01.000Z | 2022-08-24T20:35:01 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: Aiyshwariya/bert-finetuned-squad
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Aiyshwariya/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model. |
ashraq/ott-qa-20k | ashraq | 2022-10-21T09:06:25Z | 89 | 3 | null | [
"region:us"
] | 2022-10-21T09:06:25Z | 2022-10-18T19:30:29.000Z | 2022-10-18T19:30:29 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: header
sequence: string
- name: data
sequence:
sequence: string
- name: section_title
dtype: string
- name: section_text
dtype: string
- name: uid
dtype: string
- name: intro
dtype: string
splits:
- name: train
num_bytes: 41038376
num_examples: 20000
download_size: 23329221
dataset_size: 41038376
---
# Dataset Card for "ott-qa-20k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The data was obtained from [here](https://github.com/wenhuchen/OTT-QA) |
krr-oxford/OntoLAMA | krr-oxford | 2023-08-07T16:22:39Z | 89 | 1 | null | [
"task_categories:text-classification",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"Ontologies",
"Subsumption Inference",
"Natural Language Inference",
"Conceptual Knowledge",
"LMs-as-KBs",
"region:us"
] | 2023-08-07T16:22:39Z | 2023-03-02T00:45:25.000Z | 2023-03-02T00:45:25 | ---
license: apache-2.0
task_categories:
- text-classification
tags:
- Ontologies
- Subsumption Inference
- Natural Language Inference
- Conceptual Knowledge
- LMs-as-KBs
pretty_name: OntoLAMA
size_categories:
- 1M<n<10M
language:
- en
dataset_info:
- config_name: schemaorg-atomic-SI
features:
- name: v_sub_concept
dtype: string
- name: v_super_concept
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative_subsumption
'1': positive_subsumption
- name: axiom
dtype: string
splits:
- name: train
num_bytes: 103485
num_examples: 808
- name: validation
num_bytes: 51523
num_examples: 404
- name: test
num_bytes: 361200
num_examples: 2830
download_size: 82558
dataset_size: 516208
- config_name: doid-atomic-SI
features:
- name: v_sub_concept
dtype: string
- name: v_super_concept
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative_subsumption
'1': positive_subsumption
- name: axiom
dtype: string
splits:
- name: train
num_bytes: 15803053
num_examples: 90500
- name: validation
num_bytes: 1978584
num_examples: 11312
- name: test
num_bytes: 1977582
num_examples: 11314
download_size: 3184028
dataset_size: 19759219
- config_name: foodon-atomic-SI
features:
- name: v_sub_concept
dtype: string
- name: v_super_concept
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative_subsumption
'1': positive_subsumption
- name: axiom
dtype: string
splits:
- name: train
num_bytes: 128737404
num_examples: 768486
- name: validation
num_bytes: 16090857
num_examples: 96060
- name: test
num_bytes: 16098373
num_examples: 96062
download_size: 28499028
dataset_size: 160926634
- config_name: go-atomic-SI
features:
- name: v_sub_concept
dtype: string
- name: v_super_concept
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative_subsumption
'1': positive_subsumption
- name: axiom
dtype: string
splits:
- name: train
num_bytes: 152537233
num_examples: 772870
- name: validation
num_bytes: 19060490
num_examples: 96608
- name: test
num_bytes: 19069265
num_examples: 96610
download_size: 32379717
dataset_size: 190666988
- config_name: bimnli
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': contradiction
'1': entailment
splits:
- name: train
num_bytes: 43363266
num_examples: 235622
- name: validation
num_bytes: 4818648
num_examples: 26180
- name: test
num_bytes: 2420273
num_examples: 12906
download_size: 19264134
dataset_size: 50602187
- config_name: foodon-complex-SI
features:
- name: v_sub_concept
dtype: string
- name: v_super_concept
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative_subsumption
'1': positive_subsumption
- name: axiom
dtype: string
- name: anchor_axiom
dtype: string
splits:
- name: train
num_bytes: 2553731
num_examples: 3754
- name: validation
num_bytes: 1271721
num_examples: 1850
- name: test
num_bytes: 8926305
num_examples: 13080
download_size: 1064602
dataset_size: 12751757
- config_name: go-complex-SI
features:
- name: v_sub_concept
dtype: string
- name: v_super_concept
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative_subsumption
'1': positive_subsumption
- name: axiom
dtype: string
- name: anchor_axiom
dtype: string
splits:
- name: train
num_bytes: 45328802
num_examples: 72318
- name: validation
num_bytes: 5671713
num_examples: 9040
- name: test
num_bytes: 5667069
num_examples: 9040
download_size: 5059364
dataset_size: 56667584
---
# OntoLAMA: LAnguage Model Analysis for Ontology Subsumption Inference
### Dataset Summary
OntoLAMA is a set of language model (LM) probing datasets for ontology subsumption inference.
The work follows the "LMs-as-KBs" literature but focuses on conceptualised knowledge extracted from formalised KBs
such as OWL ontologies. Specifically, the subsumption inference (SI) task is introduced and formulated in the
Natural Language Inference (NLI) style, where the sub-concept and the super-concept involved in a subsumption
axiom are verbalised and fitted into a template to form the premise and hypothesis, respectively.
The sampled axioms are verified through ontology reasoning. The SI task is further divided into Atomic SI and
Complex SI where the former involves only atomic named concepts and the latter involves both atomic and complex concepts.
Real-world ontologies of different scales and domains are used for constructing OntoLAMA and in total there are four Atomic
SI datasets and two Complex SI datasets.
See dataset specifications: https://krr-oxford.github.io/DeepOnto/ontolama/
### Languages
The text in the dataset is in English, as used in the source ontologies. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example in the **Atomic SI** dataset created from the Gene Ontology (GO) is as follows:
```
{
'v_sub_concept': 'ctpase activity',
'v_super_concept': 'ribonucleoside triphosphate phosphatase activity',
'label': 1,
'axiom': 'SubClassOf(<http://purl.obolibrary.org/obo/GO_0043273> <http://purl.obolibrary.org/obo/GO_0017111>)'
}
```
An example in the **Complex SI** dataset created from the Food Ontology (FoodOn) is as follows:
```
{
'v_sub_concept': 'ham and cheese sandwich that derives from some lima bean (whole)',
'v_super_concept': 'lima bean substance',
'label': 0,
'axiom': 'SubClassOf(ObjectIntersectionOf(<http://purl.obolibrary.org/obo/FOODON_03307824> ObjectSomeValuesFrom(<http://purl.obolibrary.org/obo/RO_0001000> <http://purl.obolibrary.org/obo/FOODON_03302053>)) <http://purl.obolibrary.org/obo/FOODON_00002776>)',
'anchor_axiom': 'EquivalentClasses(<http://purl.obolibrary.org/obo/FOODON_00002776> ObjectIntersectionOf(<http://purl.obolibrary.org/obo/FOODON_00002000> ObjectSomeValuesFrom(<http://purl.obolibrary.org/obo/RO_0001000> <http://purl.obolibrary.org/obo/FOODON_03302053>)) )'
}
```
An example in the **biMNLI** dataset created from the MNLI dataset is as follows:
```
{
'premise': 'At the turn of the 19th century Los Angeles and Salt Lake City were among the burgeoning metropolises of the new American West.',
'hypothesis': 'Salt Lake City was booming in the early 19th century.',
'label': 1
}
```
### Data Fields
#### SI Data Fields
- `v_sub_concept`: verbalised sub-concept expression.
- `v_super_concept`: verbalised super-concept expression.
- `label`: a binary class label indicating whether two concepts really form a subsumption relationship (`1` means yes).
- `axiom`: a string representation of the original subsumption axiom which is useful for tracing back to the ontology.
- `anchor_axiom`: (for complex SI only) a string representation of the anchor equivalence axiom used for sampling the `axiom`.
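As a rough illustration of how an SI record becomes an NLI pair, the verbalised sub- and super-concepts are fitted into a template as premise and hypothesis. The template below is a placeholder, not the one OntoLAMA actually uses (the real templates are defined in DeepOnto):

```python
# Sketch only: `template` is a stand-in for OntoLAMA's actual verbalisation template.
def to_nli_pair(example, template="{c}."):
    """Turn an SI record into a premise/hypothesis pair with an NLI-style label."""
    premise = template.format(c=example["v_sub_concept"])
    hypothesis = template.format(c=example["v_super_concept"])
    label = "entailment" if example["label"] == 1 else "non-entailment"
    return premise, hypothesis, label

# The Atomic SI example from the Gene Ontology shown above.
go_example = {
    "v_sub_concept": "ctpase activity",
    "v_super_concept": "ribonucleoside triphosphate phosphatase activity",
    "label": 1,
}
premise, hypothesis, label = to_nli_pair(go_example)  # label == 'entailment'
```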
#### biMNLI Data Fields
- `premise`: inherited from the MNLI dataset.
- `hypothesis`: inherited from the MNLI dataset.
- `label`: a binary class label indicating `contradiction` (`0`) or `entailment` (`1`).
### Data Splits
| Source | #NamedConcepts | #EquivAxioms | #Dataset (Train/Dev/Test) |
|------------|----------------|--------------|------------------------------------------------------------------------|
| Schema.org | 894 | - | Atomic SI: 808/404/2,830 |
| DOID | 11,157 | - | Atomic SI: 90,500/11,312/11,314 |
| FoodOn | 30,995 | 2,383 | Atomic SI: 768,486/96,060/96,062 <br /> Complex SI: 3,754/1,850/13,080 |
| GO | 43,303 | 11,456 | Atomic SI: 772,870/96,608/96,610 <br /> Complex SI: 72,318/9,040/9,040 |
| MNLI | - | - | biMNLI: 235,622/26,180/12,906 |
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
The relevant paper has been accepted at Findings of ACL 2023.
```
@inproceedings{he-etal-2023-language,
title = "Language Model Analysis for Ontology Subsumption Inference",
author = "He, Yuan and
Chen, Jiaoyan and
Jimenez-Ruiz, Ernesto and
Dong, Hang and
Horrocks, Ian",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-acl.213",
doi = "10.18653/v1/2023.findings-acl.213",
pages = "3439--3453"
}
``` |
bbaaaa/iwslt14-de-en | bbaaaa | 2023-04-04T02:05:40Z | 89 | 0 | iwslt-2014 | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:translation",
"source_datasets:original",
"language:de",
"language:en",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2023-04-04T02:05:40Z | 2023-03-07T07:09:44.000Z | 2023-03-07T07:09:44 | ---
annotations_creators:
- crowdsourced
language:
- de
- en
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: IWSLT 2014
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: iwslt-2014
---
# Dataset Card for IWSLT 2014
## Dataset Description
- **Homepage:** [https://sites.google.com/site/iwsltevaluation2014](https://sites.google.com/site/iwsltevaluation2014)
### Data Splits
The `de-en` config has a single `translation` feature with German (`de`) and English (`en`) sides.
| split | examples |
|---|---|
| train | 171,721 |
| test | 4,698 |
| validation | 887 |
|
EMBO/SourceData | EMBO | 2023-11-22T20:11:49Z | 89 | 4 | null | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"biology",
"medical",
"NER",
"NEL",
"arxiv:2310.20440",
"doi:10.57967/hf/0495",
"region:us"
] | 2023-11-22T20:11:49Z | 2023-03-27T11:19:24.000Z | 2023-03-27T11:19:24 | ---
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
tags:
- biology
- medical
- NER
- NEL
size_categories:
- 10K<n<100K
pretty_name: SODA-NLP
---
# SourceData Dataset
> The largest annotated biomedical corpus for machine learning and AI in the publishing context.
SourceData is the largest annotated biomedical dataset for NER and NEL.
It is unique in its focus on the core of scientific evidence:
figure captions. It is also unique in its real-world configuration, since it does not
present isolated sentences taken out of their wider context. It offers fully annotated figure
captions that can be further enriched with context from full text, abstracts, or titles.
The goal is to extract the nature of the experiments described in them.
SourceData is also unique in labelling the causal relationships
between the biological entities present in experiments, assigning experimental roles
to each biomedical entity in the corpus.
SourceData consistently annotates
nine entity types (gene products, small molecules, subcellular components, cell lines,
cell types, tissues, organisms, diseases, and experimental assays). It is
the first dataset to annotate experimental assays
and the roles played in them by biological entities.
Each entity is linked to its corresponding ontology, allowing
for entity disambiguation and NEL.
## Cite our work
```latex
@ARTICLE{2023arXiv231020440A,
author = {{Abreu-Vicente}, Jorge and {Sonntag}, Hannah and {Eidens}, Thomas and {Lemberger}, Thomas},
title = "{The SourceData-NLP dataset: integrating curation into scientific publishing for training large language models}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2023,
month = oct,
eid = {arXiv:2310.20440},
pages = {arXiv:2310.20440},
archivePrefix = {arXiv},
eprint = {2310.20440},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv231020440A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@article {Liechti2017,
author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas},
title = {SourceData - a semantic platform for curating and searching figures},
year = {2017},
volume = {14},
number = {11},
doi = {10.1038/nmeth.4471},
URL = {https://doi.org/10.1038/nmeth.4471},
eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf},
journal = {Nature Methods}
}
```
## Dataset usage
The dataset uses semantic versioning.
Specifying a version at load time selects that version of the data.
The code below loads the latest available version of the dataset.
See the `Changelog` below for the changes introduced in each version.
```python
from datasets import load_dataset
# Load NER
ds = load_dataset("EMBO/SourceData", "NER", version="2.0.3")
# Load PANELIZATION
ds = load_dataset("EMBO/SourceData", "PANELIZATION", version="2.0.3")
# Load GENEPROD ROLES
ds = load_dataset("EMBO/SourceData", "ROLES_GP", version="2.0.3")
# Load SMALL MOLECULE ROLES
ds = load_dataset("EMBO/SourceData", "ROLES_SM", version="2.0.3")
# Load MULTI ROLES
ds = load_dataset("EMBO/SourceData", "ROLES_MULTI", version="2.0.3")
```
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-data
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org
Note that we offer the `XML`-serialized dataset. This includes all the data needed to perform NEL on SourceData.
For reproducibility, we provide `split_vx.y.z.json` files for each major version of the dataset to regenerate the
train, validation, and test splits.
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depicts data points that can be meaningfully compared to each other. `PANELIZATION` provide the start (B-PANEL_START) of these segments and allow to train for recogntion of the boundary between consecutive panel lengends.
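A minimal sketch of how `B-PANEL_START` tags can be used downstream to cut a tokenized legend into panel legends (the helper name is illustrative, not part of the dataset tooling):

```python
# Sketch: group tokens into panels; a new panel opens at each 'B-PANEL_START' tag.
def split_panels(words, panel_start_tags):
    """Split a tokenized figure legend into per-panel token lists."""
    panels = []
    for word, tag in zip(words, panel_start_tags):
        if tag == "B-PANEL_START" or not panels:
            panels.append([])
        panels[-1].append(word)
    return panels

split_panels(["(A)", "x", "(B)", "y"], ["B-PANEL_START", "O", "B-PANEL_START", "O"])
# → [['(A)', 'x'], ['(B)', 'y']]
```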
`NER`: biological and chemical entities are labeled. Specifically the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL_LINE`: cell lines
- `CELL_TYPE`: cell types
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `DISEASE`: diseases (see limitations)
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and are subjected to controlled, targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the measured variables and are the object of the measurements.
Experimental roles are generated separately for `GENEPROD` and `SMALL_MOL`; the `ROLES_MULTI` configuration
covers both at the same time.
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
### Data Fields
- `words`: `list` of `strings` text tokenized into words.
- `panel_id`: ID of the panel to which the example belongs to in the SourceData database.
- `label_ids`:
- `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible value in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL_LINE", "B-CELL_LINE", "I-CELL_TYPE", "B-CELL_TYPE", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `roles`: `list` of `strings` for the IOB2 tags for experimental roles; values in `["O", "I-CONTROLLED_VAR", "B-CONTROLLED_VAR", "I-MEASURED_VAR", "B-MEASURED_VAR"]`
- `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`
- `multi roles`: there are two different label sets: `labels` is as in `roles`, and `is_category` tags `GENEPROD` and `SMALL_MOLECULE` entities.
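For illustration, the IOB2 `entity_types` tags can be collapsed into entity spans with a small decoder (a sketch, not part of the dataset tooling):

```python
# Sketch: collect (entity_type, text) spans from parallel word and IOB2 tag lists.
def iob2_spans(words, tags):
    """Return (entity_type, span_text) pairs for each tagged entity span."""
    spans, current = [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            current = [tag[2:], [word]]   # open a new span
            spans.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(word)       # continue the open span
        else:
            current = None                # 'O' or inconsistent 'I-' closes the span
    return [(etype, " ".join(tokens)) for etype, tokens in spans]

iob2_spans(["p53", "protein", "in", "HeLa"],
           ["B-GENEPROD", "I-GENEPROD", "O", "B-CELL_LINE"])
# → [('GENEPROD', 'p53 protein'), ('CELL_LINE', 'HeLa')]
```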
### Data Splits
* NER and ROLES
```
DatasetDict({
train: Dataset({
features: ['words', 'labels', 'tag_mask', 'text'],
num_rows: 55250
})
test: Dataset({
features: ['words', 'labels', 'tag_mask', 'text'],
num_rows: 6844
})
validation: Dataset({
features: ['words', 'labels', 'tag_mask', 'text'],
num_rows: 7951
})
})
```
* PANELIZATION
```
DatasetDict({
train: Dataset({
features: ['words', 'labels', 'tag_mask'],
num_rows: 14655
})
test: Dataset({
features: ['words', 'labels', 'tag_mask'],
num_rows: 1871
})
validation: Dataset({
features: ['words', 'labels', 'tag_mask'],
num_rows: 2088
})
})
```
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. It can be used to train models for text segmentation, named entity recognition, and semantic role labeling.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al. (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles, and normalize entities with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from the figure legends from scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org).
The annotation of diseases was added to the dataset only recently. Although disease entities appear, their number is very low and they are not consistently tagged throughout the dataset.
If you need disease annotations, we recommend filtering for the examples that contain them.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
Jorge Abreu Vicente, EMBO
### Licensing Information
CC BY 4.0
### Citation Information
We are currently working on a paper to present the dataset. It is expected to be ready by spring 2023. In the meantime, the following paper should be cited.
```latex
@article {Liechti2017,
author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas},
title = {SourceData - a semantic platform for curating and searching figures},
year = {2017},
volume = {14},
number = {11},
doi = {10.1038/nmeth.4471},
URL = {https://doi.org/10.1038/nmeth.4471},
eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf},
journal = {Nature Methods}
}
```
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset.
## Changelog
* **v2.0.3** - Data curated until 20.09.2023. Correction of 2,000+ unnormalized cell entities that have now been divided into cell line and cell type. Especially relevant for NER, less important for NEL.
* **v2.0.2** - Data curated until 20.09.2023. This version will also include the patch for multi-word generic terms.
* **v1.0.2** - Modification of the generic patch in v1.0.1 to include generic terms of more than one word.
* **v1.0.1** - Added a first patch of generic terms. Terms such as cells, fluorescence, or animals were originally tagged, but in this version they are removed.
* **v1.0.0** - First publicly available version of the dataset. Data curated until March 2023.
| [
-0.32512861490249634,
-0.6920741200447083,
0.2361815720796585,
0.04993502050638199,
-0.22476047277450562,
-0.0441947840154171,
-0.18262213468551636,
-0.35330721735954285,
0.4708857238292694,
0.3901282846927643,
-0.5658053159713745,
-0.7580848336219788,
-0.4755520820617676,
0.50643688440322... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medalpaca/medical_meadow_health_advice | medalpaca | 2023-04-06T16:51:22Z | 89 | 3 | null | [
"task_categories:question-answering",
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-04-06T16:51:22Z | 2023-04-06T16:47:45.000Z | 2023-04-06T16:47:45 | ---
task_categories:
- question-answering
- text-classification
language:
- en
---
# Health Advice
## Dataset Description
- **Paper:** https://experts.syr.edu/en/publications/detecting-causal-language-use-in-science-findings
### Dataset Summary
This is the dataset used in the paper: Detecting Causal Language Use in Science Findings.
It was cleaned and formatted to fit the alpaca template.
### Citation Information
```
@inproceedings{yu-etal-2019-detecting,
title = "Detecting Causal Language Use in Science Findings",
author = "Yu, Bei and
Li, Yingya and
Wang, Jun",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1473",
doi = "10.18653/v1/D19-1473",
pages = "4664--4674",
}
``` | [
0.06051511690020561,
-0.8784054517745972,
0.48847895860671997,
0.4802068769931793,
-0.3968328535556793,
-0.5009607076644897,
-0.041323449462652206,
-0.6141064763069153,
0.5772815346717834,
0.4765632152557373,
-0.2742869555950165,
-0.6378077268600464,
-0.8513111472129822,
0.4213250577449798... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
magicgh/alpaca-cleaned | magicgh | 2023-04-10T07:48:32Z | 89 | 1 | null | [
"license:cc-by-4.0",
"region:us"
] | 2023-04-10T07:48:32Z | 2023-04-10T07:48:04.000Z | 2023-04-10T07:48:04 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MU-NLPC/Calc-gsm8k | MU-NLPC | 2023-10-30T15:54:45Z | 89 | 1 | null | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"arxiv:2305.15017",
"arxiv:2110.14168",
"region:us"
] | 2023-10-30T15:54:45Z | 2023-04-16T21:07:44.000Z | 2023-04-16T21:07:44 | ---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- question-answering
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
splits:
- name: train
num_bytes: 5373420.477987422
num_examples: 7273
- name: validation
num_bytes: 147763.5220125786
num_examples: 200
- name: test
num_bytes: 993169
num_examples: 1319
download_size: 3140154
dataset_size: 6514353.0
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
splits:
- name: train
num_bytes: 5521184
num_examples: 7473
- name: test
num_bytes: 993169
num_examples: 1319
download_size: 0
dataset_size: 6514353
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: train
path: original-splits/train-*
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-gsm8k
## Summary
This dataset is an instance of the gsm8k dataset, converted to a simple html-like language that can be easily parsed (e.g. by BeautifulSoup). The data contains 3 types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
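As a rough sketch of how these tags can be pulled out of a chain, here is a plain-regex version (the chain string below is a made-up example, not taken from the dataset; BeautifulSoup works equally well, as noted above):

```python
import re

# A made-up reasoning chain in the html-like format described above.
chain = (
    "The class bought 2 boxes of 12 pencils, i.e. "
    "<gadget>2*12</gadget><output>24</output> pencils in total. "
    "Final answer: <result>24</result>"
)

def extract(tag, text):
    """Return the contents of every <tag>...</tag> pair in the text."""
    return re.findall(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)

print(extract("gadget", chain))  # ['2*12']
print(extract("output", chain))  # ['24']
print(extract("result", chain))  # ['24']
```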
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction Process
The answers in the original dataset were in a structured but non-standard format, so they were parsed, all arithmetical expressions
were evaluated using a sympy-based calculator, the outputs were checked for consistency with the intermediate results, and the chains were exported
into a simple html-like language that BeautifulSoup can parse.
We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
However, in the case of gsm8k, we found no data leaks and removed no examples from the data.
## Content and Data splits
For convenience, we created a validation set by sampling 200 random examples from the original train split. This is the default variant:
```python
datasets.load_dataset("MU-NLPC/Calc-gsm8k")
```
The original data splits can be loaded using:
```python
datasets.load_dataset("MU-NLPC/Calc-gsm8k", "original-splits")
```
For more info about the content of the dataset, see [gsm8k HF dataset](https://huggingface.co/datasets/gsm8k) and the [official repository](https://github.com/openai/grade-school-math).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original gsm8k dataset**](https://huggingface.co/datasets/gsm8k)
- [**original gsm8k paper**](https://arxiv.org/abs/2110.14168)
- [**original gsm8k repo**](https://github.com/openai/grade-school-math)
## Licence
MIT, consistent with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` | [
-0.34333279728889465,
-0.280935674905777,
0.3222210109233856,
-0.028086543083190918,
-0.01931832730770111,
-0.16842792928218842,
-0.0870811715722084,
-0.20631776750087738,
0.19379346072673798,
0.4072801172733307,
-0.5533031225204468,
-0.4610064625740051,
-0.4300684630870819,
0.179457828402... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
theblackcat102/sharegpt-english | theblackcat102 | 2023-04-22T03:57:11Z | 89 | 5 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2023-04-22T03:57:11Z | 2023-04-22T03:40:58.000Z | 2023-04-22T03:40:58 | ---
license: other
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lighteval/wikitext_103 | lighteval | 2023-05-12T14:47:20Z | 89 | 0 | null | [
"region:us"
] | 2023-05-12T14:47:20Z | 2023-05-12T13:47:15.000Z | 2023-05-12T13:47:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
danielv835/personal_finance_v0.2 | danielv835 | 2023-05-13T21:06:35Z | 89 | 12 | null | [
"region:us"
] | 2023-05-13T21:06:35Z | 2023-05-13T21:06:30.000Z | 2023-05-13T21:06:30 | ---
dataset_info:
features:
- name: context
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 105692600
num_examples: 56557
- name: test
num_bytes: 1825911
num_examples: 1000
download_size: 64159306
dataset_size: 107518511
---
# Dataset Card for "personal_finance_v0.2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4567191004753113,
-0.19822093844413757,
0.12914372980594635,
0.4096739590167999,
-0.14753812551498413,
-0.002888081595301628,
0.27139732241630554,
-0.13060401380062103,
0.7076382040977478,
0.6365323066711426,
-0.7254858613014221,
-0.6579204797744751,
-0.4669483006000519,
-0.471519112586... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pytorch-survival/nwtco | pytorch-survival | 2023-05-15T10:54:55Z | 89 | 0 | null | [
"region:us"
] | 2023-05-15T10:54:55Z | 2023-05-15T10:54:51.000Z | 2023-05-15T10:54:51 | ---
dataset_info:
features:
- name: stage
dtype: int64
- name: age
dtype: float32
- name: in.subcohort
dtype: float32
- name: instit_2
dtype: float32
- name: histol_2
dtype: float32
- name: study_4
dtype: float32
- name: event_time
dtype: float32
- name: event_indicator
dtype: int64
splits:
- name: train
num_bytes: 161120
num_examples: 4028
download_size: 41178
dataset_size: 161120
---
# Dataset Card for "nwtco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5377203822135925,
-0.08727938681840897,
0.17915958166122437,
0.3198348879814148,
-0.31243598461151123,
0.01670624502003193,
0.28768062591552734,
-0.5023419857025146,
0.7616599202156067,
0.73548823595047,
-0.876285195350647,
-0.9169028997421265,
-0.5990079045295715,
0.00908997468650341,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openchat/openchat_sharegpt4_dataset | openchat | 2023-07-01T13:20:31Z | 89 | 117 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-07-01T13:20:31Z | 2023-05-26T13:45:36.000Z | 2023-05-26T13:45:36 | ---
task_categories:
- conversational
- text-generation
language:
- en
pretty_name: OpenChat
size_categories:
- 1K<n<10K
---
This repository contains cleaned and filtered ShareGPT GPT-4 data used to train OpenChat. Details can be found in the [OpenChat repository](https://github.com/imoneoi/openchat). | [
-0.6088218688964844,
-0.5741557478904724,
0.16189897060394287,
-0.09859652072191238,
-0.1411149650812149,
-0.021635640412569046,
-0.009331831708550453,
-0.24900110065937042,
0.28041210770606995,
0.934871256351471,
-0.935779869556427,
-0.3039730489253998,
-0.04000032693147659,
-0.2428446114... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
warshakhan/donut_vqa_ISynHMP | warshakhan | 2023-09-15T07:12:51Z | 89 | 1 | null | [
"task_categories:visual-question-answering",
"language:en",
"license:unknown",
"medical",
" prescriptions",
"region:us"
] | 2023-09-15T07:12:51Z | 2023-09-14T11:10:50.000Z | 2023-09-14T11:10:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 578804498
num_examples: 2800
- name: valid
num_bytes: 85350687
num_examples: 400
- name: test
num_bytes: 172300907
num_examples: 800
download_size: 804418576
dataset_size: 836456092
license: unknown
task_categories:
- visual-question-answering
language:
- en
tags:
- medical
- ' prescriptions'
---
# Dataset Card for "donut_vqa_ISynHMP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.21433283388614655,
-0.18345262110233307,
0.23093928396701813,
0.1025298535823822,
-0.14849722385406494,
0.2686128318309784,
0.09896941483020782,
-0.0822184830904007,
1.0307399034500122,
0.5308089852333069,
-0.9205808639526367,
-0.7165781855583191,
-0.5971201658248901,
-0.398297727108001... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lucas-meyer/asr_xh | lucas-meyer | 2023-10-16T21:54:54Z | 89 | 0 | null | [
"region:us"
] | 2023-10-16T21:54:54Z | 2023-10-16T21:07:38.000Z | 2023-10-16T21:07:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 3767023248.632
num_examples: 2506
- name: validation
num_bytes: 287475823.0
num_examples: 338
- name: test
num_bytes: 596246711.0
num_examples: 627
download_size: 2040812826
dataset_size: 4650745782.632
---
# Dataset Card for "asr_xh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5432369112968445,
-0.04486807435750961,
0.057391032576560974,
0.07180818170309067,
-0.20902395248413086,
0.17020489275455475,
0.312106192111969,
-0.2604992091655731,
0.9273452758789062,
0.540615975856781,
-0.7978478670120239,
-0.6995657086372375,
-0.6242111921310425,
-0.2478424608707428... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CASIA-LM/ChineseWebText | CASIA-LM | 2023-11-13T01:59:09Z | 89 | 16 | null | [
"arxiv:2311.01149",
"region:us"
] | 2023-11-13T01:59:09Z | 2023-11-02T15:49:54.000Z | 2023-11-02T15:49:54 | # ChineseWebText: Large-Scale High-quality Chinese Web Text Extracted with Effective Evaluation Model
This directory contains the ChineseWebText dataset and the EvalWeb tool-chain for processing CommonCrawl data. Our EvalWeb tool is publicly available on GitHub: https://github.com/CASIA-LM/ChineseWebText.
- ### Dataset Overview
We release the latest and largest Chinese dataset **ChineseWebText**, which consists of **1.42 TB** of data; each text is assigned a quality score, enabling LLM researchers to select data according to a new quality threshold. We also release a much **cleaner subset** of **600 GB** of Chinese texts with quality exceeding **90%**.
<div align=center><img src="./pictures/Overview_of_output_datasets.png" style="zoom:67%;" /></div>
- ### Data Example
```json
{
"title": "潍坊银行2021年上半年净利润同比增长29.57% 不良率降至1.10%_财经_中国网",
"score": 0.95,
"text": "潍坊银行2021年上半年净利润同比增长29.57% 不良率降至1.10%\n中国网财经8月24日讯 潍坊银行昨日披露2021年二季度信息报告显示,截至2021 年6月末,潍坊银行资产总额1920.44亿元,较上年末增长9.34%;负债总额1789.16亿元,较上年末增长10.54%。2021年上半年,潍坊银行实现净利润 6.09亿元,同比增长29.57%。\n资产质量方面,截至2021年6月末,潍坊银行不良贷款率1.10%,较上年末下降0.13个百分点。\n资本金方面,截至 2021年6月末,潍坊银行资本充足率、核心一级资本充足率、一级资本充足率分别为11.66%、7.89%、10.13%,分别较上年末下降1.89、0.89、1.15 个百分点。",
"url": "http://finance.china.com.cn/news/special/2021bnb/20210824/5638343.shtml",
"source\_domain": "finance.china.com.cn"
}
```
- "title": 【string】The title of the data text.
- "score": 【float】Quality score generated by the quality evaluation model.
- "text": 【string】Text content of data sample.
- "url": 【string】External URL, points to the original web address of the text.
- "source_domain": 【string】The domain name of the source website.
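A minimal sketch of score-based filtering (the record values below are placeholders; only the field layout follows the example above):

```python
# Keep only records whose quality score exceeds a threshold, as one would do
# to reproduce the cleaner >90%-quality subset described above.
records = [
    {"title": "a", "score": 0.95, "text": "...", "url": "http://example.com", "source_domain": "example.com"},
    {"title": "b", "score": 0.42, "text": "...", "url": "http://example.org", "source_domain": "example.org"},
]

THRESHOLD = 0.90
clean = [r for r in records if r["score"] > THRESHOLD]
print([r["title"] for r in clean])  # ['a']
```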
# Citation
Please cite the paper if you use the data in this repo.
```bibtex
@misc{chen2023chinesewebtext,
title={ChineseWebText: Large-scale High-quality Chinese Web Text Extracted with Effective Evaluation Model},
author={Jianghao Chen and Pu Jian and Tengxiao Xi and Dongyi Yi and Qianlong Du and Chenglin Ding and Guibo Zhu and Chengqing Zong and Jinqiao Wang and Jiajun Zhang},
year={2023},
eprint={2311.01149},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.2895003855228424,
-0.6031818985939026,
0.14093178510665894,
0.4737762212753296,
-0.5843622088432312,
-0.20960493385791779,
-0.2626437842845917,
-0.47500571608543396,
0.020537065342068672,
0.2661586105823517,
-0.29618141055107117,
-0.7580338716506958,
-0.39574894309043884,
-0.01496817916... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
islamrokon/Dataset | islamrokon | 2023-11-11T17:34:53Z | 89 | 0 | null | [
"region:us"
] | 2023-11-11T17:34:53Z | 2023-11-11T17:34:27.000Z | 2023-11-11T17:34:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 379777.2888616891
num_examples: 735
- name: test
num_bytes: 42369.71113831089
num_examples: 82
download_size: 165978
dataset_size: 422147.0
---
# Dataset Card for "Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6340661644935608,
-0.26226529479026794,
0.15891331434249878,
0.22118574380874634,
-0.22690987586975098,
0.11057689040899277,
0.31312087178230286,
-0.18028026819229126,
0.946476936340332,
0.501311182975769,
-0.874888002872467,
-0.7644774913787842,
-0.6311537027359009,
-0.268536239862442,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arcee-ai/nuclear_patents | arcee-ai | 2023-11-17T23:19:24Z | 89 | 0 | null | [
"region:us"
] | 2023-11-17T23:19:24Z | 2023-11-17T05:04:52.000Z | 2023-11-17T05:04:52 | ---
dataset_info:
features:
- name: patent_number
dtype: string
- name: section
dtype: string
- name: raw_text
dtype: string
splits:
- name: train
num_bytes: 388930493
num_examples: 37248
download_size: 146263843
dataset_size: 388930493
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "nuclear_patents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5238432288169861,
-0.07895703613758087,
0.3949052691459656,
0.1778896003961563,
-0.22375507652759552,
0.13005632162094116,
0.3292931020259857,
-0.08906291425228119,
0.5838650465011597,
0.5124573111534119,
-0.4619831144809723,
-0.8427221775054932,
-0.6246493458747864,
-0.2899543344974518... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
msivanes/github-issues | msivanes | 2021-12-03T21:24:58Z | 88 | 0 | null | [
"region:us"
] | 2021-12-03T21:24:58Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openclimatefix/goes-mrms | openclimatefix | 2023-05-12T08:56:03Z | 88 | 0 | null | [
"region:us"
] | 2023-05-12T08:56:03Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Dataset Card for Goes-MRMS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This dataset is a combination of GOES-16 data and MRMS radar precipitation data to roughly match the unreleased dataset used to train Google Research's MetNet. In the papers they used GOES-16 satellite imagery, Multi-Radar/Multi-Sensor (MRMS) instantaneous precipitation, hourly cumulative precipitation, and High Resolution Rapid Refresh NWP initializations as inputs to predict future MRMS precipitation rates. The precipitation rates were binned into 0.2mm/hr bins to make the output a classification task, and to allow the models to predict a probability distribution over the region of interest.
Additionally, the input image patches are much larger than the target image patches. For MetNet, the input images covered a 512x512 km area, while the target was the center 64x64 km crop. For MetNet-2, the input covered 2048x2048 km with the target being the central 512x512 km.
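The input/target relationship can be sketched as a center crop (grid sizes below are in cells and purely illustrative; the papers state the areas in km):

```python
# Sketch of the input/target relationship described above: the target patch is
# the central crop of the larger input patch.
def center_crop(grid, size):
    h, w = len(grid), len(grid[0])
    top, left = (h - size) // 2, (w - size) // 2
    return [row[left:left + size] for row in grid[top:top + size]]

input_patch = [[0.0] * 512 for _ in range(512)]  # MetNet-style input context
target = center_crop(input_patch, 64)            # central 64x64 target region
print(len(target), len(target[0]))  # 64 64
```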
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
MetNet (January 2018-July 2019) (16 days training, 2 days validation, 2 days test)
MetNet-2 (July 2017-August 2020) (Non-overlapping time ranges with 12 hour black outs in between)
Full (July 2017-January 2022) (Train: 2017-2020, except for the first of the month; Validation: first of the month, July 2017-2020; Test: 2021-2022)
## Dataset Creation
### Curation Rationale
The original curation rationale was forecasting precipitation rate in a probabilistic way. This dataset covers a different time period than the original paper, going from July 2017 through December 2021. There is a split available to match the temporal coverage of the original MetNet paper (January 2018 to July 2019) or the MetNet-2 paper (July 2017 to August 2020).
### Source Data
#### Initial Data Collection and Normalization
From the MetNet paper: "For both MRMS and GOES we acquired data for the period January 2018 through July 2019. We split the data temporally into three non-overlapping data sets by repeatedly using approximately 16 days for training followed by two days for validation and two days for testing. From these temporal splits we randomly extracted 13,717 test and validation samples and kept increasing the training set size until we observed no over-fitting at 1.72 million training samples."
From the MetNet-2 paper: "The training data consists of 1,230,585 patches of size 2048 km x 2048 km at the input and targets of size 512 km x 512 km including all 360 (2 to 720 minutes) time slices. The training area covers a region of 7000x2500 kilometers. We sample target patches from the input context region minus an all around border of 512 km. The input context is padded for all regions outside of the 7000x2500 CONUS. The validation data used for developing the models consists of 11,991 patches and the test data of 39,864 patches. The training, validation and test data are drawn from non-overlapping ranges of hours, with black out periods of 12 hours in between, over a period of observations of 3 years from July 2017 to August 2020. This ensures that the model does not learn any spurious training and evaluation correlations within any single day. HRRR only generates forecasts starting at full hours."
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Jacob Bieker (jacob@openclimatefix.org)
MetNet-1 split: MetNet Authors
MetNet-2 split: MetNet-2 Authors
### Licensing Information
All data is open and without restrictions from NOAA.
### Citation Information
Please cite NOAA as the data provider. | [
-0.4919968247413635,
-0.294055312871933,
0.37687569856643677,
0.18670283257961273,
-0.4696354568004608,
-0.2336876094341278,
-0.2224520891904831,
-0.3624035716056824,
0.12202666699886322,
0.5172843337059021,
-0.8613170385360718,
-0.5482283234596252,
-0.5598492622375488,
-0.1145996376872062... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/imdb-tlm | yxchar | 2021-11-04T18:01:06Z | 88 | 0 | null | [
"region:us"
] | 2021-11-04T18:01:06Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/nocs-mrtydi | crystina-z | 2022-03-06T18:23:15Z | 88 | 0 | null | [
"region:us"
] | 2022-03-06T18:23:15Z | 2022-03-06T01:54:51.000Z | 2022-03-06T01:54:51 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pszemraj/booksum-short | pszemraj | 2023-02-27T08:45:01Z | 88 | 1 | null | [
"task_categories:summarization",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"source_datasets:kmfoda/booksum",
"language:en",
"license:bsd-3-clause",
"booksum",
"long-document",
"region:us"
] | 2023-02-27T08:45:01Z | 2022-11-23T16:40:45.000Z | 2022-11-23T16:40:45 | ---
source_datasets: kmfoda/booksum
license:
- bsd-3-clause
train-eval-index:
- config: pszemraj--booksum_short
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
chapter: text
summary_text: target
task_categories:
- summarization
- text2text-generation
language:
- en
tags:
- booksum
- long-document
size_categories:
- 10K<n<100K
---
# booksum short
`BookSum`, but with all summaries longer than 512 `long-t5` tokens filtered out.
The columns `chapter_length` and `summary_length` **in this dataset** have been updated to reflect the total number of Long-T5 tokens in the respective source text.
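The filtering step can be sketched as follows (`count_tokens` stands in for a real Long-T5 tokenizer, e.g. via `transformers.AutoTokenizer`; a plain whitespace split is used here so the sketch stays self-contained):

```python
# Minimal sketch: drop examples whose summary exceeds 512 tokens.
def count_tokens(text):
    # Stand-in for a Long-T5 tokenizer; replace with a real tokenizer in practice.
    return len(text.split())

def keep_short(examples, max_tokens=512):
    return [ex for ex in examples if count_tokens(ex["summary_text"]) <= max_tokens]

data = [
    {"summary_text": "short summary"},
    {"summary_text": "word " * 600},  # 600 tokens -> filtered out
]
print(len(keep_short(data)))  # 1
```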
## Token Length Distribution for inputs
 | [
-0.452030748128891,
-0.13238564133644104,
0.37781259417533875,
0.0651092678308487,
-1.1038730144500732,
0.2524206340312958,
-0.20359231531620026,
-0.5237325429916382,
0.5702962875366211,
0.8549673557281494,
-0.6287470459938049,
-1.0076096057891846,
-0.8517695665359497,
0.47672000527381897,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
patrickvonplaten/restore_punctuation_medium_num_beams_1 | patrickvonplaten | 2022-12-28T21:14:09Z | 88 | 0 | null | [
"speechbox_punc",
"region:us"
] | 2022-12-28T21:14:09Z | 2022-12-28T20:45:23.000Z | 2022-12-28T20:45:23 | ---
tags:
- speechbox_punc
--- | [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MohamedRashad/ChatGPT-prompts | MohamedRashad | 2023-01-26T22:54:31Z | 88 | 30 | null | [
"region:us"
] | 2023-01-26T22:54:31Z | 2023-01-26T22:32:41.000Z | 2023-01-26T22:32:41 | ---
{}
---
# ChatGPT-Prompts Dataset
## Description
This dataset aims to provide evaluation data for the language models to come. It has been generated using the [LearnGPT website](https://www.emergentmind.com/).
| [
-0.34413549304008484,
-0.7396044135093689,
0.19473062455654144,
0.2916199266910553,
-0.18909433484077454,
0.18153849244117737,
-0.11073845624923706,
0.1416308432817459,
-0.2105053812265396,
0.2518371045589447,
-1.1124235391616821,
-0.35296282172203064,
-0.2939416468143463,
-0.3612452149391... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BelleGroup/train_1M_CN | BelleGroup | 2023-04-03T08:23:17Z | 88 | 109 | null | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | 2023-04-03T08:23:17Z | 2023-03-31T08:53:50.000Z | 2023-03-31T08:53:50 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
## Contents
Roughly 1 million Chinese instruction examples generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Sample
```
{
"instruction": "给定一个文字输入,将其中的所有数字加1。\n“明天的会议在9点开始,记得准时到达。”\n",
"input": "",
"output": "“明天的会议在10点开始,记得准时到达。”"
}
```
### Fields:
```
instruction: the instruction
input: the input (empty for all records in this dataset)
output: the output
```
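The three fields above map naturally onto a single training prompt. A minimal formatting sketch (the template is hypothetical, not necessarily the one BELLE used):

```python
def format_example(example: dict) -> str:
    # Render one record into a single prompt string; the `input`
    # branch is kept for generality even though it is empty here.
    parts = [f"Instruction: {example['instruction']}"]
    if example["input"]:
        parts.append(f"Input: {example['input']}")
    parts.append(f"Output: {example['output']}")
    return "\n".join(parts)

sample = {
    "instruction": "给定一个文字输入,将其中的所有数字加1。",
    "input": "",
    "output": "“明天的会议在10点开始,记得准时到达。”",
}
prompt = format_example(sample)
```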
## Usage Restrictions
This dataset and any derivatives generated from it may be used for research purposes only; commercial use, or any other use that could harm society, is prohibited.
This dataset does not represent the position, interests, or views of any party, and makes no claim of any kind on behalf of any group. The project assumes no liability for any damage or dispute arising from the use of this dataset.
| [
-0.23519383370876312,
-0.6840639114379883,
0.30780327320098877,
0.7593039274215698,
-0.3815680742263794,
-0.4118094742298126,
0.29784443974494934,
-0.17227360606193542,
0.46416881680488586,
0.6146251559257507,
-0.8424740433692932,
-1.0696409940719604,
-0.7840135097503662,
-0.08905573189258... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lexlms/legal_lama | lexlms | 2023-07-24T13:13:15Z | 88 | 7 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-sa-4.0",
"... | 2023-07-24T13:13:15Z | 2023-05-10T16:07:14.000Z | 2023-05-10T16:07:14 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended
task_categories:
- text-generation
- fill-mask
task_ids:
- masked-language-modeling
pretty_name: LegalLAMA
tags:
- legal
- law
---
# Dataset Card for "LegalLAMA"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#dataset-specifications)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/2305.07507
- **Point of Contact:** [Ilias Chalkidis](mailto:ilias.chalkidis@di.ku.dk)
### Dataset Summary
LegalLAMA is a diverse probing benchmark suite comprising 8 sub-tasks that aims to assess the legal knowledge PLMs acquired during pre-training.
### Dataset Specifications
| Corpus | Corpus alias | Examples | Avg. Tokens | Labels |
|--------------------------------------|----------------------|-----------|-------------|--------|
| Criminal Code Sections (Canada) | `canadian_sections` | 321 | 72 | 144 |
| Legal Terminology (EU) | `cjeu_term` | 2,127 | 164 | 23 |
| Contractual Section Titles (US) | `contract_sections` | 1,527 | 85 | 20 |
| Contract Types (US) | `contract_types` | 1,089 | 150 | 15 |
| ECHR Articles (CoE) | `ecthr_articles` | 5,072 | 69 | 13 |
| Legal Terminology (CoE) | `ecthr_terms` | 6,803 | 97 | 250 |
| Crime Charges (US) | `us_crimes` | 4,518 | 118 | 59 |
| Legal Terminology (US) | `us_terms` | 5,829 | 308 | 7 |
### Usage
Load a specific sub-corpus, given the corpus alias, as presented above.
```python
from datasets import load_dataset
dataset = load_dataset('lexlms/legal_lama', name='ecthr_terms')
```
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/2023.acl-long.865/)
```
@inproceedings{chalkidis-etal-2023-lexfiles,
title = "{L}e{XF}iles and {L}egal{LAMA}: Facilitating {E}nglish Multinational Legal Language Model Development",
author = "Chalkidis, Ilias and
Garneau, Nicolas and
Goanta, Catalina and
Katz, Daniel and
S{\o}gaard, Anders",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.865",
pages = "15513--15535",
}
``` | [
-0.3430405557155609,
-0.4587031900882721,
0.4368448853492737,
0.16484564542770386,
-0.46983909606933594,
-0.07258989661931992,
-0.21356014907360077,
-0.41632482409477234,
0.3852919936180115,
0.5244418978691101,
-0.27179086208343506,
-1.0731799602508545,
-0.47268182039260864,
0.147676855325... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
whu9/xsum_postprocess | whu9 | 2023-06-03T06:11:57Z | 88 | 0 | null | [
"region:us"
] | 2023-06-03T06:11:57Z | 2023-06-03T06:11:45.000Z | 2023-06-03T06:11:45 | ---
dataset_info:
features:
- name: source
dtype: string
- name: summary
dtype: string
- name: source_num_tokens
dtype: int64
- name: summary_num_tokens
dtype: int64
splits:
- name: train
num_bytes: 479957379
num_examples: 203788
- name: validation
num_bytes: 26334240
num_examples: 11313
- name: test
num_bytes: 26797491
num_examples: 11319
download_size: 338633607
dataset_size: 533089110
---
# Dataset Card for "xsum_postprocess"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4930371642112732,
0.025476381182670593,
0.23839624226093292,
0.11425343155860901,
-0.24961021542549133,
-0.06987901031970978,
0.25240612030029297,
-0.1344476342201233,
1.0829519033432007,
0.7362645864486694,
-0.8098557591438293,
-0.6810317039489746,
-0.8655821681022644,
-0.3483952879905... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hezarai/lscp-pos-500k | hezarai | 2023-09-02T08:41:54Z | 88 | 0 | null | [
"task_categories:token-classification",
"language:fa",
"region:us"
] | 2023-09-02T08:41:54Z | 2023-06-25T11:28:38.000Z | 2023-06-25T11:28:38 | ---
task_categories:
- token-classification
language:
- fa
pretty_name: LSCP Dataset (500k samples version)
---
This is a 500-thousand-sample version of the original [LSCP dataset](https://iasbs.ac.ir/~ansari/lscp/) that contains only the text and part-of-speech tags, and is used for sequence labeling.
### Citation
```bibtex
@InProceedings{abdikhojasteh:2020:LREC,
author = {Abdi Khojasteh, Hadi and Ansari, Ebrahim and Bohlouli, Mahdi},
title = {LSCP: Enhanced Large Scale Colloquial Persian Language Understanding},
booktitle = {Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {6323--6327},
url = {https://www.aclweb.org/anthology/2020.lrec-1.776}
}
``` | [
-0.3201351761817932,
-0.3191016912460327,
0.3424001932144165,
-0.13086068630218506,
-0.1795894354581833,
0.29530999064445496,
-0.531342625617981,
-0.2883365750312805,
0.44551506638526917,
0.6248985528945923,
-0.646912693977356,
-0.40466830134391785,
-0.1397850513458252,
0.3931249976158142,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
axiong/pmc_llama_instructions | axiong | 2023-11-23T08:47:30Z | 88 | 12 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:openrail",
"biology",
"med",
"region:us"
] | 2023-11-23T08:47:30Z | 2023-09-01T00:56:32.000Z | 2023-09-01T00:56:32 | ---
license: openrail
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- biology
- med
---
This repo provides part of the dataset used for PMC-LLaMA-13B's instruction tuning.
| Data | Size | Link |
| --- | --- | --- |
| ChatDoctor | 100K | https://www.yunxiangli.top/ChatDoctor/ |
| MedQA | 10.2K | https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options |
| MedMCQA | 183K | https://huggingface.co/datasets/medmcqa |
| PubmedQA | 211K | https://huggingface.co/datasets/pubmed_qa |
| LiveQA | 635 | https://huggingface.co/datasets/truehealth/liveqa |
| MedicationQA | 690 | https://huggingface.co/datasets/truehealth/medicationqa |
| UMLS | 99K | https://www.nlm.nih.gov/research/umls/index.html |
The whole instruction dataset is composed of 7 parts. We have covered all of them in this dataset repo except for *ChatDoctor*.
You should consider merging ChatDoctor's data to obtain the complete dataset.
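That merge can be sketched as plain record concatenation with exact-duplicate dropping; the field names below are assumptions about the schema rather than guarantees from either source:

```python
def merge_parts(*parts: list) -> list:
    """Concatenate instruction records from several sources,
    dropping exact duplicates while preserving order."""
    seen = set()
    merged = []
    for part in parts:
        for record in part:
            key = (record.get("instruction"), record.get("output"))
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged

repo_part = [{"instruction": "What causes fever?", "output": "..."}]
chatdoctor_part = [
    {"instruction": "What causes fever?", "output": "..."},  # duplicate
    {"instruction": "Define tachycardia.", "output": "..."},
]
full = merge_parts(repo_part, chatdoctor_part)
```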
| [
-0.4103669822216034,
-0.2848520278930664,
0.29294678568840027,
0.15635369718074799,
-0.381961852312088,
0.1700269728899002,
-0.02235987037420273,
0.030353911221027374,
0.24246059358119965,
0.9206468462944031,
-1.0698362588882446,
-0.7936257719993591,
-0.43903467059135437,
0.079335190355777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_budecosystem__genz-70b | open-llm-leaderboard | 2023-10-23T19:01:46Z | 88 | 0 | null | [
"region:us"
] | 2023-10-23T19:01:46Z | 2023-09-13T09:54:20.000Z | 2023-09-13T09:54:20 | ---
pretty_name: Evaluation run of budecosystem/genz-70b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [budecosystem/genz-70b](https://huggingface.co/budecosystem/genz-70b) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_budecosystem__genz-70b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-23T19:01:32.642131](https://huggingface.co/datasets/open-llm-leaderboard/details_budecosystem__genz-70b/blob/main/results_2023-10-23T19-01-32.642131.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.421875,\n \
\ \"em_stderr\": 0.005057576044482799,\n \"f1\": 0.5428481543624201,\n\
\ \"f1_stderr\": 0.004562270615925701,\n \"acc\": 0.5862101051177826,\n\
\ \"acc_stderr\": 0.011727291302229777\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.421875,\n \"em_stderr\": 0.005057576044482799,\n \
\ \"f1\": 0.5428481543624201,\n \"f1_stderr\": 0.004562270615925701\n \
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3373768006065201,\n \
\ \"acc_stderr\": 0.013023665136222105\n },\n \"harness|winogrande|5\":\
\ {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.010430917468237448\n\
\ }\n}\n```"
repo_url: https://huggingface.co/budecosystem/genz-70b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|arc:challenge|25_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_23T19_01_32.642131
path:
- '**/details_harness|drop|3_2023-10-23T19-01-32.642131.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-23T19-01-32.642131.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_23T19_01_32.642131
path:
- '**/details_harness|gsm8k|5_2023-10-23T19-01-32.642131.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-23T19-01-32.642131.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hellaswag|10_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T09-54-04.852738.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T09-54-04.852738.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T09-54-04.852738.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_23T19_01_32.642131
path:
- '**/details_harness|winogrande|5_2023-10-23T19-01-32.642131.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-23T19-01-32.642131.parquet'
- config_name: results
data_files:
- split: 2023_09_13T09_54_04.852738
path:
- results_2023-09-13T09-54-04.852738.parquet
- split: 2023_10_23T19_01_32.642131
path:
- results_2023-10-23T19-01-32.642131.parquet
- split: latest
path:
- results_2023-10-23T19-01-32.642131.parquet
---
# Dataset Card for Evaluation run of budecosystem/genz-70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/budecosystem/genz-70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [budecosystem/genz-70b](https://huggingface.co/budecosystem/genz-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_budecosystem__genz-70b",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-23T19:01:32.642131](https://huggingface.co/datasets/open-llm-leaderboard/details_budecosystem__genz-70b/blob/main/results_2023-10-23T19-01-32.642131.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.421875,
"em_stderr": 0.005057576044482799,
"f1": 0.5428481543624201,
"f1_stderr": 0.004562270615925701,
"acc": 0.5862101051177826,
"acc_stderr": 0.011727291302229777
},
"harness|drop|3": {
"em": 0.421875,
"em_stderr": 0.005057576044482799,
"f1": 0.5428481543624201,
"f1_stderr": 0.004562270615925701
},
"harness|gsm8k|5": {
"acc": 0.3373768006065201,
"acc_stderr": 0.013023665136222105
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.010430917468237448
}
}
```
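As a quick sanity check, the aggregated `acc` value above is the arithmetic mean of the two per-task accuracies (a local sketch using the numbers shown, not an official leaderboard computation):

```python
# Recompute the aggregate "acc" as the mean of the per-task accuracies
# reported above for gsm8k and winogrande.
task_acc = {
    "harness|gsm8k|5": 0.3373768006065201,
    "harness|winogrande|5": 0.835043409629045,
}

mean_acc = sum(task_acc.values()) / len(task_acc)
print(mean_acc)  # close to the reported "acc" of 0.5862101051177826
```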
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.41181546449661255,
-0.688989520072937,
0.253316193819046,
0.2077721357345581,
-0.2965050935745239,
0.1949695497751236,
-0.41540002822875977,
-0.12788502871990204,
0.381841778755188,
0.4617634117603302,
-0.7168672680854797,
-1.0615382194519043,
-0.6126574873924255,
0.19422437250614166,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Trelis/stanford-NIL-disclosure-ft | Trelis | 2023-10-17T09:34:27Z | 88 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"fine-tuning",
"NIL",
"region:us"
] | 2023-10-17T09:34:27Z | 2023-10-06T08:43:16.000Z | 2023-10-06T08:43:16 | ---
task_categories:
- text-generation
language:
- en
tags:
- fine-tuning
- NIL
size_categories:
- n<1K
---
# NIL Policy
Data is taken from the [Stanford website](https://gostanford.com/sports/2022/11/11/nil-student-athletes.aspx).
Data is chunked into rows for the training set.
The test.csv dataset is generated using Llama 70B to extract key takeaways from the raw text.
For educational and non-commercial use only. | [
-0.09718488901853561,
-0.6835845112800598,
0.26612767577171326,
0.2271374762058258,
-0.1452321708202362,
0.11764475703239441,
0.12054059654474258,
-0.3624800443649292,
0.4263329803943634,
0.7388677000999451,
-1.0529152154922485,
-0.2827344834804535,
-0.008010247722268105,
-0.01213414873927... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
owkin/nct-crc-he | owkin | 2023-10-26T09:42:47Z | 88 | 0 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"biology",
"medical",
"cancer",
"colorectal cancer",
"region:us"
] | 2023-10-26T09:42:47Z | 2023-10-13T11:31:07.000Z | 2023-10-13T11:31:07 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ADI
'1': BACK
'2': DEB
'3': LYM
'4': MUC
'5': MUS
'6': NORM
'7': STR
'8': TUM
splits:
- name: nct_crc_he_100
num_bytes: 15058006
num_examples: 99
- name: nct_crc_he_1k
num_bytes: 151950686
num_examples: 999
- name: crc_val_he_7k
num_bytes: 1092855241.74
num_examples: 7180
download_size: 1095677324
dataset_size: 1259863933.74
configs:
- config_name: default
data_files:
- split: nct_crc_he_100
path: data/nct_crc_he_100-*
- split: nct_crc_he_1k
path: data/nct_crc_he_1k-*
- split: crc_val_he_7k
path: data/crc_val_he_7k-*
license: cc-by-sa-3.0
task_categories:
- image-classification
language:
- en
tags:
- biology
- medical
- cancer
- colorectal cancer
pretty_name: NCT_CRC
size_categories:
- 10K<n<100K
---
# Dataset Card for NCT-CRC-HE
### Dataset Summary
The NCT-CRC-HE dataset consists of images of human tissue slides, some of which contain cancer.
### Data Splits
The dataset contains tissues from different parts of the body. Examples from each of the 9 classes can be seen below.

### Initial Data Collection and Normalization
NCT biobank (National Center for Tumor Diseases) and the UMM pathology archive (University Medical Center Mannheim). Images were normalized using Macenko normalization.
### Licensing Information
CC-BY-SA
### Citation Information
Owkin claims no ownership of the dataset. This is simply an upload of the original dataset onto HF.
[Link to original paper](https://zenodo.org/records/1214456)
| [
-0.1817297488451004,
-0.21266381442546844,
0.20530007779598236,
-0.18727342784404755,
-0.6505969762802124,
0.30744704604148865,
0.012469465844333172,
-0.16992256045341492,
0.3982241451740265,
0.7667328715324402,
-0.4698769152164459,
-1.0937449932098389,
-0.5109419822692871,
0.1883097141981... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jmelsbach/easy-german-explanations | jmelsbach | 2023-11-02T14:46:41Z | 88 | 0 | null | [
"region:us"
] | 2023-11-02T14:46:41Z | 2023-11-02T14:46:34.000Z | 2023-11-02T14:46:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: href
dtype: string
- name: url
dtype: string
- name: content
dtype: string
- name: parsed_content
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 76834511.02908278
num_examples: 2860
- name: test
num_bytes: 19235492.970917225
num_examples: 716
download_size: 22733394
dataset_size: 96070004.0
---
# Dataset Card for "easy-german-explanations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8102384209632874,
-0.6572426557540894,
0.5452378988265991,
0.35313552618026733,
-0.2500763535499573,
-0.3806709051132202,
-0.08092751353979111,
-0.12202299386262894,
0.47356170415878296,
0.15583522617816925,
-1.05675208568573,
-0.8606070280075073,
-0.6140314936637878,
-0.171435356140136... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NikitiusIvanov/protein_seq_to_go_bio_process | NikitiusIvanov | 2023-11-26T20:34:27Z | 88 | 0 | null | [
"region:us"
] | 2023-11-26T20:34:27Z | 2023-11-11T12:44:24.000Z | 2023-11-11T12:44:24 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hugosousa/TimeQA | hugosousa | 2023-11-28T19:02:42Z | 88 | 1 | null | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:bsd-3-clause-clear",
"region:us"
] | 2023-11-28T19:02:42Z | 2023-11-12T19:31:09.000Z | 2023-11-12T19:31:09 | ---
annotations_creators: []
language:
- en
language_creators:
- crowdsourced
- machine-generated
license:
- bsd-3-clause-clear
multilinguality:
- monolingual
pretty_name: TimeQA
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- question-answering
task_ids:
- closed-domain-qa
---
# TimeQA
Check out the original [GitHub repo](https://github.com/wenhuchen/Time-Sensitive-QA/tree/main) to learn more about the dataset.
| [
-0.2235194742679596,
-0.4337158203125,
0.1560944765806198,
0.1436457335948944,
-0.14403195679187775,
0.37880027294158936,
0.35431739687919617,
-0.34329891204833984,
0.2467418611049652,
0.33235907554626465,
-0.7305540442466736,
-0.5048623085021973,
-0.05065450072288513,
-0.19848576188087463... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_subtraction_decimal | jlbaker361 | 2023-11-17T05:53:58Z | 88 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:58Z | 2023-11-17T04:47:44.000Z | 2023-11-17T04:47:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2030.2222222222222
num_examples: 40
- name: test
num_bytes: 253.77777777777777
num_examples: 5
download_size: 4553
dataset_size: 2284.0
---
# Dataset Card for "small_subtraction_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6442224979400635,
-0.36641207337379456,
0.1599646955728531,
0.14523249864578247,
-0.3843582570552826,
-0.3169858455657959,
0.019644711166620255,
-0.17754188179969788,
0.8458942174911499,
0.1515674591064453,
-0.8296144008636475,
-0.5812747478485107,
-0.6630585193634033,
-0.16941215097904... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
girrajjangid/mini-platypus | girrajjangid | 2023-11-20T13:30:04Z | 88 | 0 | null | [
"region:us"
] | 2023-11-20T13:30:04Z | 2023-11-20T13:30:01.000Z | 2023-11-20T13:30:01 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 28547714
num_examples: 5000
download_size: 17892667
dataset_size: 28547714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
strombergnlp/offenseval_2020 | strombergnlp | 2022-05-12T10:04:57Z | 87 | 1 | null | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"arxiv:2006.07235",
"arxiv:2004.02192",
"arxiv:1908.04531",
"arxi... | 2022-05-12T10:04:57Z | 2022-05-10T10:22:47.000Z | 2022-05-10T10:22:47 | ---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- ar
- da
- en
- gr
- tr
licenses:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: OffensEval 2020
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
- text-classification-other-hate-speech-detection
extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
paperswithcode_id:
- dkhate
- ogtd
---
# Dataset Card for "offenseval_2020"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)
- **Repository:**
- **Paper:** [https://aclanthology.org/2020.semeval-1.188/](https://aclanthology.org/2020.semeval-1.188/), [https://arxiv.org/abs/2006.07235](https://arxiv.org/abs/2006.07235)
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
OffensEval 2020 features a multilingual dataset with five languages. The languages included in OffensEval 2020 are:
* Arabic
* Danish
* English
* Greek
* Turkish
The annotation follows the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019.
In this taxonomy we break down offensive content into the following three sub-tasks taking the type and target of offensive content into account.
The following sub-tasks were organized:
* Sub-task A - Offensive language identification;
* Sub-task B - Automatic categorization of offense types;
* Sub-task C - Offense target identification.
English training data is omitted, so it needs to be obtained separately (see [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp))
The source datasets come from:
* Arabic [https://arxiv.org/pdf/2004.02192.pdf](https://arxiv.org/pdf/2004.02192.pdf), [https://aclanthology.org/2021.wanlp-1.13/](https://aclanthology.org/2021.wanlp-1.13/)
* Danish [https://arxiv.org/pdf/1908.04531.pdf](https://arxiv.org/pdf/1908.04531.pdf), [https://aclanthology.org/2020.lrec-1.430/?ref=https://githubhelp.com](https://aclanthology.org/2020.lrec-1.430/)
* English [https://arxiv.org/pdf/2004.14454.pdf](https://arxiv.org/pdf/2004.14454.pdf), [https://aclanthology.org/2021.findings-acl.80.pdf](https://aclanthology.org/2021.findings-acl.80.pdf)
* Greek [https://arxiv.org/pdf/2003.07459.pdf](https://arxiv.org/pdf/2003.07459.pdf), [https://aclanthology.org/2020.lrec-1.629/](https://aclanthology.org/2020.lrec-1.629/)
* Turkish [https://aclanthology.org/2020.lrec-1.758/](https://aclanthology.org/2020.lrec-1.758/)
### Supported Tasks and Leaderboards
* [OffensEval 2020](https://sites.google.com/site/offensevalsharedtask/results-and-paper-submission)
### Languages
Five languages are covered: bcp47 `ar;da;en;gr;tr`
## Dataset Structure
There are five named configs, one per language:
* `ar` Arabic
* `da` Danish
* `en` English
* `gr` Greek
* `tr` Turkish
The training data for English is absent; it consists of 9M tweets that need to be rehydrated separately. See [https://zenodo.org/record/3950379#.XxZ-aFVKipp](https://zenodo.org/record/3950379#.XxZ-aFVKipp)
### Data Instances
An example of 'train' looks as follows.
```
{
'id': '0',
'text': 'PLACEHOLDER TEXT',
'subtask_a': 1,
}
```
### Data Fields
- `id`: a `string` feature.
- `text`: a `string`.
- `subtask_a`: whether or not the instance is offensive; `0: NOT, 1: OFF`
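For illustration, the integer `subtask_a` label can be mapped back to its name for an instance shaped like the example above (a sketch; not part of any official loader):

```python
# Map the integer subtask_a label back to its string name
# (0: NOT offensive, 1: OFF offensive).
SUBTASK_A_NAMES = {0: "NOT", 1: "OFF"}

def label_name(instance: dict) -> str:
    """Return the human-readable subtask_a label for one instance."""
    return SUBTASK_A_NAMES[instance["subtask_a"]]

example = {"id": "0", "text": "PLACEHOLDER TEXT", "subtask_a": 1}
print(label_name(example))  # OFF
```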
### Data Splits
| name |train|test|
|---------|----:|---:|
|ar|7839|1827|
|da|2961|329|
|en|0|3887|
|gr|8743|1544|
|tr|31277|3515|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification; the rationale differs for each language dataset.
### Source Data
#### Initial Data Collection and Normalization
Varies per language dataset
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
Varies per language dataset
#### Who are the annotators?
Varies per language dataset; native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The datasets is curated by each sub-part's paper authors.
### Licensing Information
This data is available and distributed under Creative Commons attribution license, CC-BY 4.0.
### Citation Information
```
@inproceedings{zampieri-etal-2020-semeval,
title = "{S}em{E}val-2020 Task 12: Multilingual Offensive Language Identification in Social Media ({O}ffens{E}val 2020)",
author = {Zampieri, Marcos and
Nakov, Preslav and
Rosenthal, Sara and
Atanasova, Pepa and
Karadzhov, Georgi and
Mubarak, Hamdy and
Derczynski, Leon and
Pitenis, Zeses and
{\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i}},
booktitle = "Proceedings of the Fourteenth Workshop on Semantic Evaluation",
month = dec,
year = "2020",
address = "Barcelona (online)",
publisher = "International Committee for Computational Linguistics",
url = "https://aclanthology.org/2020.semeval-1.188",
doi = "10.18653/v1/2020.semeval-1.188",
pages = "1425--1447",
abstract = "We present the results and the main findings of SemEval-2020 Task 12 on Multilingual Offensive Language Identification in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in five languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted official runs on the test data, and 70 teams submitted system description papers.",
}
```
### Contributions
Author-added dataset [@leondz](https://github.com/leondz)
| [
-0.32924020290374756,
-0.7656857371330261,
-0.004322920460253954,
0.07691625505685806,
-0.20517167448997498,
0.13723336160182953,
-0.30027779936790466,
-0.66463702917099,
0.2653296887874603,
0.2867463231086731,
-0.38942286372184753,
-0.9892159700393677,
-0.6664133667945862,
0.2450038492679... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116212 | autoevaluate | 2022-09-09T04:54:17Z | 87 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-09T04:54:17Z | 2022-09-09T03:37:29.000Z | 2022-09-09T03:37:29 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: ['bertscore']
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: validation
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | [
-0.2338111698627472,
-0.15836618840694427,
0.3128703236579895,
0.271539568901062,
-0.18255755305290222,
-0.00462713185697794,
0.012688813731074333,
-0.46968260407447815,
0.1006406843662262,
0.4693675637245178,
-0.8046007752418518,
-0.339938223361969,
-0.698354184627533,
-0.1335030049085617... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kunishou/oasst1-89k-ja | kunishou | 2023-11-12T09:07:01Z | 87 | 15 | null | [
"language:ja",
"license:apache-2.0",
"region:us"
] | 2023-11-12T09:07:01Z | 2023-05-06T09:12:30.000Z | 2023-05-06T09:12:30 | ---
license: apache-2.0
language:
- ja
---
This dataset was created by automatically translating "OpenAssistant/oasst1" into Japanese.
The "ng_translation" flag indicates that the translation was not successful, and "1" means that the translation failed.
Therefore, for data with "1", "text" and "text_en" contain the same text.
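A minimal sketch of honoring that flag (our helper, assuming the data is loaded into a pandas DataFrame with an `ng_translation` column as described above):

```python
import pandas as pd

def drop_failed_translations(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows whose automatic translation failed (ng_translation == "1").

    For those rows "text" and "text_en" hold the same untranslated text,
    so they add no Japanese data.
    """
    return df[df["ng_translation"].astype(str) != "1"].reset_index(drop=True)


# Tiny illustrative frame; real data would come from the dataset itself.
df = pd.DataFrame({
    "text": ["こんにちは、世界!", "Hello, world!"],
    "text_en": ["Hello, world!", "Hello, world!"],
    "ng_translation": ["0", "1"],
})
print(len(drop_failed_translations(df)))  # 1
```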
**Update:**
- 2023/11/12
Released [oasst1-chat-44k-ja](https://huggingface.co/datasets/kunishou/oasst1-chat-44k-ja), a chat-format conversion of oasst1-89k-ja.
- 2023/10/21
Manually corrected roughly 2,000 mistranslations in code-related data introduced by the automatic translation.
**<details><summary>Show an example of the fixes</summary><div>**
- Before the fix
```
もちろん!これは、Flask Webフレームワークを使用して文字列を提供する単純なAPIエンドポイントを作成するPythonスクリプトの例です。
フラスコ輸入フラスコから
app = flask(__name__)
@app.route( '/')
def hello_world():
「こんにちは、世界!」を返します
__name__ == '__main__'の場合:
app.run()
このスクリプトでは、最初にフラスコモジュールからフラスコクラスをインポートします。次に、__Name__変数を使用してアプリケーションの名前を指定するフラスコクラスの新しいインスタンスを作成します。
```
- After the fix
```
もちろん!これは、Flask Webフレームワークを使用して文字列を提供する単純なAPIエンドポイントを作成するPythonスクリプトの例です。
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, world!'
if __name__ == '__main__':
app.run()
このスクリプトでは、最初にフラスコモジュールからフラスコクラスをインポートします。次に、__Name__変数を使用してアプリケーションの名前を指定するフラスコクラスの新しいインスタンスを作成します。
```
</div></details>
The following code converts the data into Instruction / Output format (the prompter's instruction and the assistant's answer).
If you use this dataset for fine-tuning, please convert it with this code.
Reference for the conversion code:
https://github.com/h2oai/h2o-llmstudio/blob/5ebfd3879e226b4e1afd0a0b45eb632e60412129/app_utils/utils.py#L1888
```shell
pip install datasets
```
```python
from datasets import load_dataset
import pandas as pd
import os
import json
# Load the original oasst1 data
ds = load_dataset("OpenAssistant/oasst1")
train = ds["train"].to_pandas()
val = ds["validation"].to_pandas()
df_origin = pd.concat([train, val], axis=0).reset_index(drop=True)
# Load the Japanese-translated oasst1 data
df_ja = pd.read_json("oasst1_ja_89k.json")
# Merge the original oasst1 data with the Japanese translation
df = pd.merge(df_origin, df_ja[["message_id", "text_ja"]], on="message_id", how="left").copy()
df["text"] = df["text_ja"]
df_assistant = df[(df.role == "assistant")].copy()
df_prompter = df[(df.role == "prompter")].copy()
df_prompter = df_prompter.set_index("message_id")
df_assistant["output"] = df_assistant["text"].values
inputs = []
parent_ids = []
for _, row in df_assistant.iterrows():
input = df_prompter.loc[row.parent_id]
inputs.append(input.text)
parent_ids.append(input.parent_id)
df_assistant["instruction"] = inputs
df_assistant["parent_id"] = parent_ids
df_assistant = df_assistant[
["instruction", "output", "message_id", "parent_id", "lang", "rank"]
].rename(columns={"message_id": "id"})
# Exclude translation tasks only, since their data is corrupted
df_assistant2 = df_assistant[~df_assistant["instruction"].str.contains("翻訳")]

# Below: write the converted data out to a JSON file ---------------
learn_datas = []

for n in range(len(df_assistant2)):
    learn_data = {
        "instruction": str(df_assistant2.iloc[n, 0]),
        "input": "",
        "output": str(df_assistant2.iloc[n, 1]),
    }
    learn_datas.append(learn_data)
json_learn_data = json.dumps(learn_datas, indent=4, ensure_ascii=False)
with open('oasst1_ja_converted.json', 'w', encoding="utf-8") as f:
f.write(json_learn_data)
```
oasst1-ja-89k Repository
https://github.com/kunishou/oasst1-89k-ja
OpenAssistant/oasst1
https://huggingface.co/datasets/OpenAssistant/oasst1 | [
-0.3889903426170349,
-0.6559628844261169,
0.1976662129163742,
0.059706997126340866,
-0.06551089137792587,
-0.12028384208679199,
-0.13956047594547272,
-0.1621844470500946,
0.23773694038391113,
0.22295473515987396,
-0.6390013694763184,
-0.5290199518203735,
-0.5216113924980164,
0.335898578166... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
seungheondoh/LP-MusicCaps-MC | seungheondoh | 2023-08-01T03:52:24Z | 87 | 5 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"music",
"text-to-music",
"music-to-text",
"art",
"arxiv:2307.16372",
"region:us"
] | 2023-08-01T03:52:24Z | 2023-07-26T04:19:27.000Z | 2023-07-26T04:19:27 | ---
license: mit
language:
- en
tags:
- music
- text-to-music
- music-to-text
- art
pretty_name: LP-MusicCaps-MC
size_categories:
- 1K<n<10K
---
======================================
**!important**: Be careful when using `caption_attribute_prediction` (we do not recommend using it)!
======================================
# Dataset Card for LP-MusicCaps-MC
## Dataset Description
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
## Dataset Summary
**LP-MusicCaps** is a Large Language Model based Pseudo Music Caption dataset for `text-to-music` and `music-to-text` tasks. We construct the music-to-caption pairs with tag-to-caption generation (using three existing multi-label tag datasets and four task instructions). The data sources are MusicCaps, Magnatagtune, and Million Song Dataset ECALS subset.
- [LP-MusicCaps MSD](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD): 0.5M Audio with 2.2M Caption
- [LP-MusicCaps MTT](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MTT): 22k Audio with 88k Caption
- **LP-MusicCaps MC (This Repo)**: 5521 Audio with 22084 Caption. We utilize 13,219 unique aspects used by 10 musicians in the [MusicCaps dataset](https://huggingface.co/datasets/google/MusicCaps) to perform tag-to-caption generation through LLM.
## Data Instances
Each instance in LP-MusicCaps MC (this repo) pairs audio with multiple text captions and meta-attributes:
```
{
'fname': '[-0Gj8-vB1q4]-[30-40]',
'ytid': '-0Gj8-vB1q4',
'aspect_list': ['low quality',
'sustained strings melody',
'soft female vocal',
'mellow piano melody',
'sad',
'soulful',
'ballad'
],
'caption_ground_truth': 'The low quality recording features a ballad song that contains sustained strings, mellow piano melody and soft female vocal singing over it. It sounds sad and soulful, like something you would hear at Sunday services.',
'caption_writing': 'This heartfelt ballad showcases a soulful and sad low-quality sustained strings melody intertwined with a mellow piano melody, and a soft female vocal, resulting in an emotionally charged and sonically rich experience for listeners.',
'caption_summary': 'A melancholic and soulful ballad with low-quality sustained strings, a mellow piano melody, and soft female vocals.',
'caption_paraphrase': 'A melancholic ballad of soulful sadness featuring a low quality sustained strings melody complemented by a soft, mellow piano melody accompanied by a plaintive, soothing female vocal.',
'caption_attribute_prediction': 'This soulful ballad features a sustained strings melody that tugs at your heartstrings, accompanied by a mellow piano melody and gentle percussion. The soft, emotionally-charged female vocal delivers poetic and poignant lyrics that speak to the sadness and pain of lost love. The addition of a beautiful string arrangement adds to the melodic depth of the song, making it a truly moving listening experience. With its slow tempo, this track exudes a mellow and introspective vibe, perfect for those moments when you need a moment to sit and reflect on the past.',
'pseudo_attribute': ['emotional lyrics',
'slow tempo',
'gentle percussion',
'string arrangement'
],
'is_crawled': True,
'author_id': 4,
'start_s': 30,
'end_s': 40,
'audioset_positive_labels': '/m/0140xf,/m/02cjck,/m/04rlf',
'is_balanced_subset': False,
'is_audioset_eval': True
}
```
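Since each instance carries several pseudo captions plus the ground truth, one common training pattern is to sample a caption per step. A minimal sketch over the instance layout shown above (the helper is ours, not part of the dataset):

```python
import random

# The three writing-style pseudo captions; caption_attribute_prediction is
# left out per the warning at the top of this card.
CAPTION_FIELDS = ["caption_writing", "caption_summary", "caption_paraphrase"]

def sample_caption(example: dict, rng: random.Random) -> str:
    """Pick one pseudo caption for an audio-text training pair."""
    return example[rng.choice(CAPTION_FIELDS)]


example = {
    "caption_writing": "This heartfelt ballad showcases ...",
    "caption_summary": "A melancholic and soulful ballad ...",
    "caption_paraphrase": "A melancholic ballad of soulful sadness ...",
}
print(sample_caption(example, random.Random(0)))
```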
## Pseudo Caption Example:
Input Tags:
*"video game theme, no singer, instrumental, analog sounding, small keyboard, beatboxing, playful, cheerful, groovy"*
Output Pseudo Captions
*"instrumental track has a joyful and playful vibe, perfect for a video game theme. With no singer, the analog-sounding music features a small keyboard and beatboxing, creating a groovy and cheerful atmosphere"*
[More Information for pseudo caption generation](https://github.com/seungheondoh/lp-music-caps/blob/main/lpmc/llm_captioning/generate.py)
## Data Fields
| Name | Type | Description |
|------------------------------|-----------------|---------------------------------------------------------------------|
| fname | string | File name of the data |
| ytid | string | YouTube ID of the data |
| aspect_list | list of strings | List of unique aspects used by musicians in the MusicCaps dataset |
| caption_ground_truth | string | Ground truth caption for the data |
| caption_writing | string | Pseudo Caption generated through a writing instruction |
| caption_summary | string | Pseudo Caption generated through a summary instruction |
| caption_paraphrase | string | Pseudo Caption generated through a paraphrase instruction |
| caption_attribute_prediction | string          | Pseudo Caption generated through an attribute_prediction instruction |
| pseudo_attribute             | list of strings | List of pseudo-attributes used in caption_attribute_prediction      |
| is_crawled | boolean | Indicates whether the data is crawled or not |
| author_id | int64 | ID of the author |
| start_s | int64 | Start time in seconds |
| end_s | int64 | End time in seconds |
| audioset_positive_labels | string | Positive labels from the AudioSet dataset |
| is_balanced_subset | boolean | Indicates whether the data is part of a balanced subset |
| is_audioset_eval | boolean | Indicates whether the data is for AudioSet evaluation |
## Considerations for Using the Data
The LP-MusicCaps dataset is recommended for research purposes. Due to a labeling issue, we recommend not using caption_attribute_prediction and pseudo_attribute unless it is specifically for large-scale pretraining. Additionally, the field "is_crawled" indicates the samples used in the reference paper mentioned below.
## Discussion of Biases
It will be described in a paper to be released soon.
## Other Known Limitations
It will be described in a paper to be released soon. | [
-0.5449396371841431,
-0.4047495126724243,
0.2872081398963928,
0.3541111648082733,
-0.36690014600753784,
0.08140692114830017,
-0.35040339827537537,
-0.29772302508354187,
0.5734356641769409,
0.7278459072113037,
-1.1059231758117676,
-0.9189695119857788,
-0.39536845684051514,
0.117372326552867... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/objaverse-xl | allenai | 2023-10-31T16:46:54Z | 87 | 42 | null | [
"language:en",
"license:odc-by",
"arxiv:2307.05663",
"region:us"
] | 2023-10-31T16:46:54Z | 2023-08-17T17:50:21.000Z | 2023-08-17T17:50:21 | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse-XL
<a href="//arxiv.org/abs/2307.05663" target="_blank">
<img src="https://img.shields.io/badge/arXiv-2307.05663-<COLOR>">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities.
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December 2022. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in zero-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
Polycam data is available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
| [
-0.7837151288986206,
-0.8079136610031128,
0.6497288942337036,
0.1704091578722,
-0.12133674323558807,
-0.32844430208206177,
0.09941913932561874,
-0.7191182374954224,
0.32024720311164856,
0.4527100920677185,
-0.4484484791755676,
-0.33936429023742676,
-0.541567862033844,
0.2807455062866211,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jpawan33/fkr30k-image-captioning-dataset | jpawan33 | 2023-09-09T04:17:11Z | 87 | 1 | null | [
"region:us"
] | 2023-09-09T04:17:11Z | 2023-09-06T19:00:10.000Z | 2023-09-06T19:00:10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1625135945.608
num_examples: 31782
download_size: 1621386563
dataset_size: 1625135945.608
---
# Dataset Card for "fkr30k-image-captioning-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6234040260314941,
0.19785481691360474,
0.07893753051757812,
0.4825522303581238,
-0.5355483889579773,
0.11540278047323227,
0.26656585931777954,
-0.12195061147212982,
0.4844570755958557,
0.4939940273761749,
-0.9665002822875977,
-0.7300826907157898,
-0.5369322896003723,
0.01124236453324556... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shariqfarooq/USYllmblue | shariqfarooq | 2023-09-27T00:10:28Z | 87 | 0 | null | [
"region:us"
] | 2023-09-27T00:10:28Z | 2023-09-27T00:08:05.000Z | 2023-09-27T00:08:05 | ---
dataset_info:
features:
- name: gligen
dtype: image
- name: layoutgpt
dtype: image
- name: llmgrounded
dtype: image
- name: ours
dtype: image
- name: stablediffusion
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 96699641.0
num_examples: 44
download_size: 96703577
dataset_size: 96699641.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "USYllmblue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.43532663583755493,
-0.14287351071834564,
-0.022719617933034897,
0.17587554454803467,
-0.3995758891105652,
0.22805331647396088,
0.42952805757522583,
-0.3364808261394501,
0.9207167029380798,
0.5309679508209229,
-0.9399793744087219,
-0.7289614081382751,
-0.4248857796192169,
-0.302659809589... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Salesforce/cloudops_tsf | Salesforce | 2023-10-31T08:23:00Z | 87 | 1 | null | [
"task_categories:time-series-forecasting",
"size_categories:100M<n<1B",
"license:cc-by-4.0",
"arxiv:2310.05063",
"region:us"
] | 2023-10-31T08:23:00Z | 2023-10-29T07:51:30.000Z | 2023-10-29T07:51:30 | ---
license: cc-by-4.0
task_categories:
- time-series-forecasting
pretty_name: cloud
size_categories:
- 100M<n<1B
---
# Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain
[Paper](https://arxiv.org/abs/2310.05063) | [Code](https://github.com/SalesforceAIResearch/pretrain-time-series-cloudops)
Datasets accompanying the paper "Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain".
## Quick Start
```python
from datasets import load_dataset
dataset = load_dataset('Salesforce/cloudops_tsf', 'azure_vm_traces_2017')
```
## Available Datasets
### azure_vm_traces_2017
```python
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
num_rows: 17568
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'feat_static_real', 'past_feat_dynamic_real'],
num_rows: 159472
})
})
```
### borg_cluster_data_2011
```python
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 11117
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 143386
})
})
```
### alibaba_cluster_trace_2018
```python
DatasetDict({
train_test: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 6048
})
pretrain: Dataset({
features: ['start', 'target', 'item_id', 'feat_static_cat', 'past_feat_dynamic_real'],
num_rows: 58409
})
})
```
## Dataset Config
```python
from datasets import load_dataset_builder
config = load_dataset_builder('Salesforce/cloudops_tsf', 'alibaba_cluster_trace_2018').config
print(config)
CloudOpsTSFConfig(
name='alibaba_cluster_trace_2018',
version=1.0.0,
data_dir=None,
data_files=None,
description='',
prediction_length=48,
freq='5T',
stride=48,
univariate=False,
multivariate=True,
optional_fields=('feat_static_cat', 'past_feat_dynamic_real'),
rolling_evaluations=12,
test_split_date=Period('2018-01-08 11:55', '5T'),
_feat_static_cat_cardinalities={
'pretrain': (
('container_id', 64457),
('app_du', 9484)),
'train_test': (
('container_id', 6048),
('app_du', 1292)
)
},
target_dim=2,
feat_static_real_dim=0,
past_feat_dynamic_real_dim=6
)
```
`test_split_date` is provided to achieve the same train-test split as given in the paper.
This is essentially the date/time of `rolling_evaluations * prediction_length` time steps before the last time step in the dataset.
Note that the pre-training dataset includes the test region, and thus should also be filtered before usage.
## Acknowledgements
The datasets were processed from the following original sources. Please cite the original sources if you use the datasets.
* Azure VM Traces 2017
* Bianchini. Resource central: Understanding and predicting workloads for improved resource management in large cloud platforms. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 153–167, 2017.
* https://github.com/Azure/AzurePublicDataset
* Borg Cluster Data 2011
* John Wilkes. More Google cluster data. Google research blog, November 2011. Posted at http://googleresearch.blogspot.com/2011/11/more-google-cluster-data.html.
* https://github.com/google/cluster-data
* Alibaba Cluster Trace 2018
* Jing Guo, Zihao Chang, Sa Wang, Haiyang Ding, Yihui Feng, Liang Mao, and Yungang Bao. Who limits the resource efficiency of my datacenter: An analysis of alibaba datacenter traces. In Proceedings of the International Symposium on Quality of Service, pp. 1–10, 2019.
* https://github.com/alibaba/clusterdata
## Citation
```
@article{woo2023pushing,
title={Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain},
author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Sahoo, Doyen},
journal={arXiv preprint arXiv:2310.05063},
year={2023}
}
```
| [
-0.5656758546829224,
-0.36663374304771423,
0.25773224234580994,
0.051950983703136444,
-0.4396047592163086,
-0.07807448506355286,
-0.1123412624001503,
-0.2929410934448242,
0.30129504203796387,
0.3078673183917999,
-1.0363436937332153,
-0.4903584122657776,
-0.39860862493515015,
-0.31475895643... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Zahra032/qadataset | Zahra032 | 2023-11-19T22:48:21Z | 87 | 0 | null | [
"region:us"
] | 2023-11-19T22:48:21Z | 2023-10-29T14:55:23.000Z | 2023-10-29T14:55:23 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gmongaras/BERT_Base_Cased_512_GLUE_Mapped | gmongaras | 2023-11-25T17:18:02Z | 87 | 0 | null | [
"region:us"
] | 2023-11-25T17:18:02Z | 2023-11-25T17:10:03.000Z | 2023-11-25T17:10:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: float64
- name: dataset_name
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 340267802
num_examples: 1342430
- name: validation
num_bytes: 16853181
num_examples: 69711
- name: test
num_bytes: 95818051
num_examples: 425205
download_size: 162773837
dataset_size: 452939034
---
Original Dataset from: https://huggingface.co/datasets/glue
This dataset is adapted from https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_512_GLUE
Every split besides the ax split is in this dataset.
Lines longer than 512 tokens under the BERT-cased (`bert-base-cased`) tokenizer were removed from the original dataset.
If any sentences are still longer than 512 tokens, they are truncated to 512 tokens.
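The 512-token clipping can be sketched as follows (a hypothetical helper, not the repo's actual preprocessing code):

```python
def truncate_example(example: dict, max_len: int = 512) -> dict:
    """Clip tokenized fields to the model's maximum sequence length.

    Applies to the parallel sequences produced by the bert-base-cased
    tokenizer: input_ids, token_type_ids, attention_mask.
    """
    out = dict(example)
    for key in ("input_ids", "token_type_ids", "attention_mask"):
        out[key] = example[key][:max_len]
    return out


example = {
    "label": 1.0,
    "dataset_name": "mnli",
    "input_ids": list(range(600)),   # 600 tokens: too long for BERT
    "token_type_ids": [0] * 600,
    "attention_mask": [1] * 600,
}
clipped = truncate_example(example)
print(len(clipped["input_ids"]))  # 512
```

A real pipeline would also make sure the final `[SEP]` token survives the clip; this sketch ignores that detail.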
Original labels and dataset categories are retained. | [
-0.40087759494781494,
-0.7279461026191711,
0.2420666217803955,
0.26703205704689026,
-0.11167486757040024,
0.14702492952346802,
-0.10125136375427246,
-0.11224990338087082,
0.7360909581184387,
0.7604166269302368,
-0.9032730460166931,
-0.3930801451206207,
-0.3921597898006439,
0.00673208571970... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shivmoha/squad-unanswerable | shivmoha | 2021-11-27T09:40:02Z | 86 | 1 | null | [
"region:us"
] | 2021-11-27T09:40:02Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wietsedv/stsbenchmark | wietsedv | 2022-03-09T09:14:43Z | 86 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-03-09T09:14:43Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: cc-by-sa-4.0
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bioasq_task_b | bigbio | 2022-12-22T15:41:12Z | 86 | 3 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:41:12Z | 2022-09-26T04:05:28.000Z | 2022-09-26T04:05:28 | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NLM_LICENSE
pretty_name: BioASQ Task B
homepage: http://participants-area.bioasq.org/datasets/
bigbio_pubmed: true
bigbio_public: false
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for BioASQ Task B
## Dataset Description
- **Homepage:** http://participants-area.bioasq.org/datasets/
- **Pubmed:** True
- **Public:** False
- **Tasks:** QA
The BioASQ corpus contains multiple question
answering tasks annotated by biomedical experts, including yes/no, factoid, list,
and summary questions. Pertaining to our objective of comparing neural language
models, we focus on the yes/no questions (Task 7b), and leave the inclusion
of other tasks to future work. Each question is paired with a reference text
containing multiple sentences from a PubMed abstract and a yes/no answer. We use
the official train/dev/test split of 670/75/140 questions.
See 'Domain-Specific Language Model Pretraining for Biomedical
Natural Language Processing'
## Citation Information
```
@article{tsatsaronis2015overview,
title = {
An overview of the BIOASQ large-scale biomedical semantic indexing and
question answering competition
},
author = {
Tsatsaronis, George and Balikas, Georgios and Malakasiotis, Prodromos
and Partalas, Ioannis and Zschunke, Matthias and Alvers, Michael R and
Weissenborn, Dirk and Krithara, Anastasia and Petridis, Sergios and
Polychronopoulos, Dimitris and others
},
year = 2015,
journal = {BMC bioinformatics},
publisher = {BioMed Central Ltd},
volume = 16,
number = 1,
pages = 138
}
```
| [
-0.2269325852394104,
-0.7423354983329773,
0.583069384098053,
0.0634143128991127,
-0.16849911212921143,
-0.13144344091415405,
0.10451412945985794,
-0.4631897509098053,
0.262503981590271,
0.5041995644569397,
-0.6324462890625,
-0.5625602602958679,
-0.4329712986946106,
0.56751549243927,
-0.1... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cannlytics/cannabis_licenses | cannlytics | 2023-09-30T14:23:05Z | 86 | 3 | null | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"license:cc-by-4.0",
"cannabis",
"licenses",
"region:us"
] | 2023-09-30T14:23:05Z | 2022-09-28T19:52:23.000Z | 2022-09-28T19:52:23 | ---
pretty_name: cannabis_licenses
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc-by-4.0
tags:
- cannabis
- licenses
---
# Cannabis Licenses
<!-- FIXME:
<div align="center" style="text-align:center; margin-top:1rem; margin-bottom: 1rem;">
<img style="max-height:365px;width:100%;max-width:720px;" alt="" src="analysis/figures/cannabis-licenses-map.png">
</div> -->
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Normalization](#data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [License](#license)
- [Citation](#citation)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/cannlytics/cannlytics>
- **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_licenses>
- **Point of Contact:** <dev@cannlytics.com>
### Dataset Summary
**Cannabis Licenses** is a collection of cannabis license data for each state that permits adult-use cannabis. The dataset also includes an aggregate subset, `all`, containing all licenses.
## Dataset Structure
The dataset is partitioned into a subset for each adult-use state plus the aggregate `all` subset.
| State | Code | Status |
|-------|------|--------|
| [All](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/all) | `all` | ✅ |
| [Alaska](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ak) | `ak` | ✅ |
| [Arizona](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/az) | `az` | ✅ |
| [California](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ca) | `ca` | ✅ |
| [Colorado](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/co) | `co` | ✅ |
| [Connecticut](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ct) | `ct` | ✅ |
| [Delaware](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/de) | `de` | ✅ |
| [Illinois](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/il) | `il` | ✅ |
| [Maine](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/me) | `me` | ✅ |
| [Maryland](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/md) | `md` | ✅ |
| [Massachusetts](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ma) | `ma` | ✅ |
| [Michigan](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mi) | `mi` | ✅ |
| [Missouri](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mo) | `mo` | ✅ |
| [Montana](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mt) | `mt` | ✅ |
| [Nevada](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nv) | `nv` | ✅ |
| [New Jersey](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nj) | `nj` | ✅ |
| [New Mexico](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nm) | `nm` | ✅ |
| [New York](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ny) | `ny` | ✅ |
| [Oregon](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/or) | `or` | ✅ |
| [Rhode Island](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ri) | `ri` | ✅ |
| [Vermont](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/vt) | `vt` | ✅ |
| Virginia | `va` | ⏳ Expected 2024 |
| [Washington](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/wa) | `wa` | ✅ |
The following states have issued medical cannabis licenses, but are not (yet) included in the dataset:
- Alabama
- Arkansas
- District of Columbia (D.C.)
- Florida
- Kentucky (2024)
- Louisiana
- Minnesota
- Mississippi
- New Hampshire
- North Dakota
- Ohio
- Oklahoma
- Pennsylvania
- South Dakota
- Utah
- West Virginia
### Data Instances
You can load the licenses for each state. For example:
```py
from datasets import load_dataset
# Get the licenses for a specific state, e.g. New York.
dataset = load_dataset('cannlytics/cannabis_licenses', 'ny')
data = dataset['data']
```
### Data Fields
Below is a non-exhaustive list of the standardized fields that you may expect to find for each observation.
| Field | Example | Description |
|-------|-----|-------------|
| `id` | `"1046"` | A state-unique ID for the license. |
| `license_number` | `"C10-0000423-LIC"` | A unique license number. |
| `license_status` | `"Active"` | The status of the license. Only licenses that are active are included. |
| `license_status_date` | `"2022-04-20T00:00"` | The date the status was assigned, an ISO-formatted date if present. |
| `license_term` | `"Provisional"` | The term for the license. |
| `license_type` | `"Commercial - Retailer"` | The type of business license. |
| `license_designation` | `"Adult-Use and Medicinal"` | A state-specific classification for the license. |
| `issue_date` | `"2019-07-15T00:00:00"` | An issue date for the license, an ISO-formatted date if present. |
| `expiration_date` | `"2023-07-14T00:00:00"` | An expiration date for the license, an ISO-formatted date if present. |
| `licensing_authority_id` | `"BCC"` | A unique ID for the state licensing authority. |
| `licensing_authority` | `"Bureau of Cannabis Control (BCC)"` | The state licensing authority. |
| `business_legal_name` | `"Movocan"` | The legal name of the business that owns the license. |
| `business_dba_name` | `"Movocan"` | The name the license is doing business as. |
| `business_owner_name` | `"redacted"` | The name of the owner of the license. |
| `business_structure` | `"Corporation"` | The structure of the business that owns the license. |
| `activity` | `"Pending Inspection"` | Any relevant license activity. |
| `premise_street_address` | `"1632 Gateway Rd"` | The street address of the business. |
| `premise_city` | `"Calexico"` | The city of the business. |
| `premise_state` | `"CA"` | The state abbreviation of the business. |
| `premise_county` | `"Imperial"` | The county of the business. |
| `premise_zip_code` | `"92231"` | The zip code of the business. |
| `business_email` | `"redacted@gmail.com"` | The business email of the license. |
| `business_phone` | `"(555) 555-5555"` | The business phone of the license. |
| `business_website` | `"cannlytics.com"` | The business website of the license. |
| `parcel_number` | `"A42"` | An ID for the business location. |
| `premise_latitude` | `32.69035693` | The latitude of the business. |
| `premise_longitude` | `-115.38987552` | The longitude of the business. |
| `data_refreshed_date` | `"2022-09-21T12:16:33.3866667"` | An ISO-formatted time when the license data was updated. |
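Because each license includes `premise_latitude` and `premise_longitude`, you can, for example, measure the distance between two premises. A minimal sketch using the haversine formula (the coordinate pairs below are hypothetical, not taken from the dataset):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical premises: distance between two coordinate pairs.
d = haversine_km(32.69035693, -115.38987552, 34.05, -118.24)
```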
### Data Splits
The data is split into subsets by state. You can retrieve all licenses by requesting the `all` subset.
```py
from datasets import load_dataset
# Get all cannabis licenses.
dataset = load_dataset('cannlytics/cannabis_licenses', 'all')
data = dataset['data']
```
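Once loaded, simple aggregations over the documented fields are straightforward. A sketch that counts licenses per state, using hypothetical records in place of the loaded split:

```python
from collections import Counter

def licenses_per_state(records):
    """Count license records by their `premise_state` field."""
    return Counter(r["premise_state"] for r in records)

# Hypothetical records standing in for `dataset['data']`.
records = [
    {"premise_state": "CA"},
    {"premise_state": "CA"},
    {"premise_state": "NY"},
]
counts = licenses_per_state(records)  # Counter({'CA': 2, 'NY': 1})
```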
## Dataset Creation
### Curation Rationale
Data about organizations operating in the cannabis industry for each state is valuable for research.
### Source Data
| State | Data Source URL |
|-------|-----------------|
| Alaska | <https://www.commerce.alaska.gov/abc/marijuana/Home/licensesearch> |
| Arizona | <https://azcarecheck.azdhs.gov/s/?licenseType=null> |
| California | <https://search.cannabis.ca.gov/> |
| Colorado | <https://sbg.colorado.gov/med/licensed-facilities> |
| Connecticut | <https://portal.ct.gov/DCP/Medical-Marijuana-Program/Connecticut-Medical-Marijuana-Dispensary-Facilities> |
| Delaware | <https://dhss.delaware.gov/dhss/dph/hsp/medmarcc.html> |
| Illinois | <https://www.idfpr.com/LicenseLookup/AdultUseDispensaries.pdf> |
| Maine | <https://www.maine.gov/dafs/ocp/open-data/adult-use> |
| Maryland | <https://mmcc.maryland.gov/Pages/Dispensaries.aspx> |
| Massachusetts | <https://masscannabiscontrol.com/open-data/data-catalog/> |
| Michigan | <https://michigan.maps.arcgis.com/apps/webappviewer/index.html?id=cd5a1a76daaf470b823a382691c0ff60> |
| Missouri | <https://health.mo.gov/safety/cannabis/licensed-facilities.php> |
| Montana | <https://mtrevenue.gov/cannabis/#CannabisLicenses> |
| Nevada | <https://ccb.nv.gov/list-of-licensees/> |
| New Jersey | <https://data.nj.gov/stories/s/ggm4-mprw> |
| New Mexico | <https://nmrldlpi.force.com/bcd/s/public-search-license?division=CCD&language=en_US> |
| New York | <https://cannabis.ny.gov/licensing> |
| Oregon | <https://www.oregon.gov/olcc/marijuana/pages/recreational-marijuana-licensing.aspx> |
| Rhode Island | <https://dbr.ri.gov/office-cannabis-regulation/compassion-centers/licensed-compassion-centers> |
| Vermont | <https://ccb.vermont.gov/licenses> |
| Washington | <https://lcb.wa.gov/records/frequently-requested-lists> |
### Data Collection and Normalization
In the `algorithms` directory, you can find the algorithms used for data collection. You can use these algorithms to recreate the dataset. First, you will need to clone the repository:
```
git clone https://huggingface.co/datasets/cannlytics/cannabis_licenses
```
You can then install the algorithm Python (3.9+) requirements:
```
cd cannabis_licenses
pip install -r requirements.txt
```
Then you can run all of the data-collection algorithms:
```
python algorithms/main.py
```
Or you can run each algorithm individually. For example:
```
python algorithms/get_licenses_ny.py
```
### Personal and Sensitive Information
This dataset includes names of individuals, public addresses, and contact information for cannabis licensees. It is important to take care to use these data points in a legal manner.
## Considerations for Using the Data
### Social Impact of Dataset
Arguably, substantial social impact could result from the study of permitted adult-use cannabis; therefore, researchers and data consumers alike should take the utmost care when using this dataset.
### Discussion of Biases
Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.
### Other Known Limitations
The data is for adult-use cannabis licenses. It would be valuable to include medical cannabis licenses too.
## Additional Information
### Dataset Curators
Curated by [🔥Cannlytics](https://cannlytics.com)<br>
<contact@cannlytics.com>
### License
```
Copyright (c) 2022-2023 Cannlytics and the Cannabis Data Science Team
The files associated with this dataset are licensed under a
Creative Commons Attribution 4.0 International license.
You can share, copy and modify this dataset so long as you give
appropriate credit, provide a link to the CC BY license, and
indicate if changes were made, but you may not do so in a way
that suggests the rights holder has endorsed you or your use of
the dataset. Note that further permission may be required for
any content within the dataset that is identified as belonging
to a third party.
```
### Citation
Please cite the following if you use the code examples in your research:
```bibtex
@misc{cannlytics2023,
title={Cannabis Data Science},
author={Skeate, Keegan and O'Sullivan-Sutherland, Candace},
journal={https://github.com/cannlytics/cannabis-data-science},
year={2023}
}
```
### Contributions
Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
| [
-0.3278438150882721,
-0.5898494124412537,
0.7653698325157166,
0.4563341438770294,
-0.32317987084388733,
-0.33667367696762085,
-0.04963421821594238,
-0.40398675203323364,
0.9929631948471069,
0.6321269869804382,
-0.5142663717269897,
-1.4371378421783447,
-0.4630395174026489,
0.107052467763423... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
skytnt/anime-segmentation | skytnt | 2022-10-03T01:35:40Z | 86 | 21 | null | [
"task_categories:image-segmentation",
"task_ids:semantic-segmentation",
"size_categories:10K<n<100K",
"source_datasets:original",
"license:cc0-1.0",
"region:us"
] | 2022-10-03T01:35:40Z | 2022-09-30T05:27:06.000Z | 2022-09-30T05:27:06 | ---
annotations_creators: []
language: []
language_creators: []
license:
- cc0-1.0
multilinguality: []
pretty_name: Anime Segmentation
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
---
## Dataset Description
A segmentation dataset for anime characters
My project: [anime-segmentation](https://github.com/SkyTNT/anime-segmentation)
### Dataset Summary
| Dir | Description | Format | Images |
| ---- | ---- | ---- | ---- |
| bg | background images | jpg | 8057 |
| fg | foreground images, transparent background | png | 11802 |
| imgs | real images with background and foreground| jpg | 1111 |
| masks| labels for imgs | jpg | 1111 |
Total size: 18GB
### Collection Method
Collect background images from [character_bg_seg_data](https://github.com/ShuhongChen/bizarre-pose-estimator#download)
Collect foreground images from the Danbooru website.
Collect imgs and masks from [AniSeg](https://github.com/jerryli27/AniSeg#about-the-models) and the Danbooru website.
I use [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) to restore the background images.
I clean the dataset first with [DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) and then manually, to make sure every foreground is an anime character.
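Synthetic image/mask pairs like those in `imgs` and `masks` can be produced by alpha-compositing a transparent foreground onto a background. A minimal NumPy sketch of that idea (not the exact pipeline used for this dataset):

```python
import numpy as np

def composite(fg_rgba, bg_rgb):
    """Alpha-composite a (H, W, 4) foreground onto a (H, W, 3) background.

    Arrays are floats in [0, 1]. Returns the composited image and the
    segmentation mask (the foreground's alpha channel).
    """
    alpha = fg_rgba[..., 3:4]                                 # (H, W, 1)
    image = alpha * fg_rgba[..., :3] + (1.0 - alpha) * bg_rgb
    return image, alpha[..., 0]
```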
### Contributions
Thanks to [@SkyTNT](https://github.com/SkyTNT) for adding this dataset.
Thanks to [@ShuhongChen](https://github.com/ShuhongChen) for [character_bg_seg_data](https://github.com/ShuhongChen/bizarre-pose-estimator#download)
Thanks to [@jerryli27](https://github.com/jerryli27) for [AniSeg](https://github.com/jerryli27/AniSeg#about-the-models)
| [
-0.3832366466522217,
-0.4556455910205841,
0.38820379972457886,
0.1458955556154251,
-0.6021888256072998,
-0.10930801182985306,
0.12928086519241333,
-0.31321075558662415,
0.6393517851829529,
0.7644520401954651,
-0.8904180526733398,
-0.917454719543457,
-0.43308326601982117,
0.0558939836919307... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DFKI-SLT/kbp37 | DFKI-SLT | 2023-04-27T13:04:14Z | 86 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other",
"language:en",
"license:other",
"relation extraction",
"arxiv:1508... | 2023-04-27T13:04:14Z | 2023-01-06T12:26:09.000Z | 2023-01-06T12:26:09 | ---
annotations_creators:
- other
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: KBP37 is an English Relation Classification dataset
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: kbp37
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names(e1,e2)
'2': org:alternate_names(e2,e1)
'3': org:city_of_headquarters(e1,e2)
'4': org:city_of_headquarters(e2,e1)
'5': org:country_of_headquarters(e1,e2)
'6': org:country_of_headquarters(e2,e1)
'7': org:founded(e1,e2)
'8': org:founded(e2,e1)
'9': org:founded_by(e1,e2)
'10': org:founded_by(e2,e1)
'11': org:members(e1,e2)
'12': org:members(e2,e1)
'13': org:stateorprovince_of_headquarters(e1,e2)
'14': org:stateorprovince_of_headquarters(e2,e1)
'15': org:subsidiaries(e1,e2)
'16': org:subsidiaries(e2,e1)
'17': org:top_members/employees(e1,e2)
'18': org:top_members/employees(e2,e1)
'19': per:alternate_names(e1,e2)
'20': per:alternate_names(e2,e1)
'21': per:cities_of_residence(e1,e2)
'22': per:cities_of_residence(e2,e1)
'23': per:countries_of_residence(e1,e2)
'24': per:countries_of_residence(e2,e1)
'25': per:country_of_birth(e1,e2)
'26': per:country_of_birth(e2,e1)
'27': per:employee_of(e1,e2)
'28': per:employee_of(e2,e1)
'29': per:origin(e1,e2)
'30': per:origin(e2,e1)
'31': per:spouse(e1,e2)
'32': per:spouse(e2,e1)
'33': per:stateorprovinces_of_residence(e1,e2)
'34': per:stateorprovinces_of_residence(e2,e1)
'35': per:title(e1,e2)
'36': per:title(e2,e1)
splits:
- name: train
num_bytes: 3570626
num_examples: 15917
- name: validation
num_bytes: 388935
num_examples: 1724
- name: test
num_bytes: 762806
num_examples: 3405
download_size: 5106673
dataset_size: 4722367
- config_name: kbp37_formatted
features:
- name: id
dtype: string
- name: token
sequence: string
- name: e1_start
dtype: int32
- name: e1_end
dtype: int32
- name: e2_start
dtype: int32
- name: e2_end
dtype: int32
- name: relation
dtype:
class_label:
names:
'0': no_relation
'1': org:alternate_names(e1,e2)
'2': org:alternate_names(e2,e1)
'3': org:city_of_headquarters(e1,e2)
'4': org:city_of_headquarters(e2,e1)
'5': org:country_of_headquarters(e1,e2)
'6': org:country_of_headquarters(e2,e1)
'7': org:founded(e1,e2)
'8': org:founded(e2,e1)
'9': org:founded_by(e1,e2)
'10': org:founded_by(e2,e1)
'11': org:members(e1,e2)
'12': org:members(e2,e1)
'13': org:stateorprovince_of_headquarters(e1,e2)
'14': org:stateorprovince_of_headquarters(e2,e1)
'15': org:subsidiaries(e1,e2)
'16': org:subsidiaries(e2,e1)
'17': org:top_members/employees(e1,e2)
'18': org:top_members/employees(e2,e1)
'19': per:alternate_names(e1,e2)
'20': per:alternate_names(e2,e1)
'21': per:cities_of_residence(e1,e2)
'22': per:cities_of_residence(e2,e1)
'23': per:countries_of_residence(e1,e2)
'24': per:countries_of_residence(e2,e1)
'25': per:country_of_birth(e1,e2)
'26': per:country_of_birth(e2,e1)
'27': per:employee_of(e1,e2)
'28': per:employee_of(e2,e1)
'29': per:origin(e1,e2)
'30': per:origin(e2,e1)
'31': per:spouse(e1,e2)
'32': per:spouse(e2,e1)
'33': per:stateorprovinces_of_residence(e1,e2)
'34': per:stateorprovinces_of_residence(e2,e1)
'35': per:title(e1,e2)
'36': per:title(e2,e1)
splits:
- name: train
num_bytes: 4943394
num_examples: 15807
- name: validation
num_bytes: 539197
num_examples: 1714
- name: test
num_bytes: 1055918
num_examples: 3379
download_size: 5106673
dataset_size: 6581345
---
# Dataset Card for "kbp37"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [kbp37](https://github.com/zhangdongxu/kbp37)
- **Paper:** [Relation Classification via Recurrent Neural Network](https://arxiv.org/abs/1508.01006)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
### Dataset Summary
KBP37 is a revision of the MIML-RE annotation dataset provided by Gabor Angeli et al. (2014). It uses both the 2010 and
2013 KBP official document collections, as well as a July 2013 dump of Wikipedia, as the text corpus for annotation.
A total of 33,811 sentences have been annotated. Zhang and Wang made several refinements:
1. They added direction to the relation names, e.g. '`per:employee_of`' is split into '`per:employee_of(e1,e2)`'
and '`per:employee_of(e2,e1)`'. They also replaced '`org:parents`' with '`org:subsidiaries`' and replaced
'`org:member_of`' with '`org:members`' (by their reverse directions).
2. They discarded low-frequency relations such that both directions of each relation occur more than 100 times in the
dataset.
KBP37 contains 18 directional relations and an additional '`no_relation`' relation, resulting in 37 relation classes.
Note:
- There is a formatted version that you can load with `datasets.load_dataset('kbp37', name='kbp37_formatted')`. This version is tokenized with `str.split()` and
provides entity spans as token offsets instead of enclosing entities in XML tags. It discards some examples that are invalid in the original
dataset and would lead to entity-offset errors, e.g. example train/1276.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in KBP37 is in English (BCP-47 en)
## Dataset Structure
### Data Instances
#### kbp37
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 4.7 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"sentence": "<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included the + for many of his signature distortion sounds using a variety of guitars to achieve various tonal options .",
"relation": 27
}
```
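The entity mentions can be pulled out of the XML-style tags with a simple regular expression. A minimal sketch using the example instance above:

```python
import re

def extract_entities(sentence):
    """Return the <e1> and <e2> mention strings from a kbp37 sentence."""
    e1 = re.search(r"<e1>(.*?)</e1>", sentence).group(1).strip()
    e2 = re.search(r"<e2>(.*?)</e2>", sentence).group(1).strip()
    return e1, e2

sentence = ("<e1> Thom Yorke </e1> of <e2> Radiohead </e2> has included "
            "the + for many of his signature distortion sounds .")
extract_entities(sentence)  # ('Thom Yorke', 'Radiohead')
```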
#### kbp37_formatted
- **Size of downloaded dataset files:** 5.11 MB
- **Size of the generated dataset:** 6.58 MB
An example of 'train' looks as follows:
```json
{
"id": "1",
"token": ["Leland", "High", "School", "is", "a", "public", "high", "school", "located", "in", "the", "Almaden", "Valley", "in", "San", "Jose", "California", "USA", "in", "the", "San", "Jose", "Unified", "School", "District", "."],
"e1_start": 0,
"e1_end": 3,
"e2_start": 14,
"e2_end": 16,
"relation": 3
}
```
### Data Fields
#### kbp37
- `id`: the instance id of this sentence, a `string` feature.
- `sentence`: the sentence, a `string` features.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"no_relation": 0, "org:alternate_names(e1,e2)": 1, "org:alternate_names(e2,e1)": 2, "org:city_of_headquarters(e1,e2)": 3, "org:city_of_headquarters(e2,e1)": 4, "org:country_of_headquarters(e1,e2)": 5, "org:country_of_headquarters(e2,e1)": 6, "org:founded(e1,e2)": 7, "org:founded(e2,e1)": 8, "org:founded_by(e1,e2)": 9, "org:founded_by(e2,e1)": 10, "org:members(e1,e2)": 11, "org:members(e2,e1)": 12, "org:stateorprovince_of_headquarters(e1,e2)": 13, "org:stateorprovince_of_headquarters(e2,e1)": 14, "org:subsidiaries(e1,e2)": 15, "org:subsidiaries(e2,e1)": 16, "org:top_members/employees(e1,e2)": 17, "org:top_members/employees(e2,e1)": 18, "per:alternate_names(e1,e2)": 19, "per:alternate_names(e2,e1)": 20, "per:cities_of_residence(e1,e2)": 21, "per:cities_of_residence(e2,e1)": 22, "per:countries_of_residence(e1,e2)": 23, "per:countries_of_residence(e2,e1)": 24, "per:country_of_birth(e1,e2)": 25, "per:country_of_birth(e2,e1)": 26, "per:employee_of(e1,e2)": 27, "per:employee_of(e2,e1)": 28, "per:origin(e1,e2)": 29, "per:origin(e2,e1)": 30, "per:spouse(e1,e2)": 31, "per:spouse(e2,e1)": 32, "per:stateorprovinces_of_residence(e1,e2)": 33, "per:stateorprovinces_of_residence(e2,e1)": 34, "per:title(e1,e2)": 35, "per:title(e2,e1)": 36}
```
#### kbp37_formatted
- `id`: the instance id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, using `str.split()`, a `list` of `string` features.
- `e1_start`: the 0-based index of the start token of the first argument, an `int` feature.
- `e1_end`: the 0-based index of the end token of the first argument, exclusive, an `int` feature.
- `e2_start`: the 0-based index of the start token of the second argument, an `int` feature.
- `e2_end`: the 0-based index of the end token of the second argument, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label (same as `kbp37`).
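Since the end offsets are exclusive, the argument mentions can be recovered by slicing the token list. A minimal sketch using the example instance above:

```python
def entity_spans(example):
    """Recover the two argument mentions from a kbp37_formatted example."""
    tokens = example["token"]
    e1 = " ".join(tokens[example["e1_start"]:example["e1_end"]])
    e2 = " ".join(tokens[example["e2_start"]:example["e2_end"]])
    return e1, e2

example = {
    "token": ["Leland", "High", "School", "is", "a", "public", "high",
              "school", "located", "in", "the", "Almaden", "Valley", "in",
              "San", "Jose", "California", "USA", "."],
    "e1_start": 0, "e1_end": 3, "e2_start": 14, "e2_end": 16,
}
entity_spans(example)  # ('Leland High School', 'San Jose')
```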
### Data Splits
| | Train | Dev | Test |
|-------|-------|------|------|
| kbp37 | 15917 | 1724 | 3405 |
| kbp37_formatted | 15807 | 1714 | 3379 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/ZhangW15a,
author = {Dongxu Zhang and
Dong Wang},
title = {Relation Classification via Recurrent Neural Network},
journal = {CoRR},
volume = {abs/1508.01006},
year = {2015},
url = {http://arxiv.org/abs/1508.01006},
eprinttype = {arXiv},
eprint = {1508.01006},
timestamp = {Fri, 04 Nov 2022 18:37:50 +0100},
biburl = {https://dblp.org/rec/journals/corr/ZhangW15a.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | [
-0.5267307758331299,
-0.5282827615737915,
0.313615083694458,
0.2087622582912445,
-0.18555280566215515,
-0.08244997262954712,
-0.27218490839004517,
-0.426737517118454,
0.4950029253959656,
0.5017855167388916,
-0.690658688545227,
-0.8578656911849976,
-0.49280092120170593,
0.21895113587379456,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
derek-thomas/squad-v1.1-t5-question-generation | derek-thomas | 2023-03-09T13:50:46Z | 86 | 2 | null | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|squad",
"language:en",
"license:cc-by-4.0",
"questiongeneration",
"question-generation",
"text2tex... | 2023-03-09T13:50:46Z | 2023-02-08T12:10:34.000Z | 2023-02-08T12:10:34 | ---
dataset_info:
features:
- name: context
dtype: string
- name: questions
dtype: string
splits:
- name: train
num_bytes: 20293805
num_examples: 18896
- name: validation
num_bytes: 2376313
num_examples: 2067
download_size: 12600387
dataset_size: 22670118
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Question Generation for T5 based on Squad V1.1
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
tags:
- questiongeneration
- question-generation
- text2text-generation
task_categories:
- text2text-generation
task_ids: []
---
# Dataset Card for "squad-v1.1-t5-question-generation"
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Paper:** [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### Dataset Summary
This is a modified version of the Stanford Question Answering Dataset (SQuAD), adapted for question generation with All Questions in One Line (AQOL), just as in [Transformer-based End-to-End Question Generation](https://arxiv.org/pdf/2005.01107v1.pdf),
and specifically for the T5 family of models. The prefix is `generate questions: ` so that the task can be unique to a trained model.
Check out the generation notebook [here](https://nbviewer.org/urls/huggingface.co/datasets/derek-thomas/squad-v1.1-t5-question-generation/resolve/main/Squad_V1_Question_Generation.ipynb).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
## Dataset Structure
### Data Instances
#### plain_text
An example of 'train' looks as follows.
```
{
  "context": "generate questions: This is a test context.",
  "questions": "Is this a test? {sep_token} Is this another Test {sep_token}"
}
```
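At inference time, a generated AQOL string can be split back into individual questions on the separator token. A minimal sketch, assuming the literal separator `{sep_token}` from the example above (substitute your tokenizer's actual separator):

```python
def split_questions(aqol, sep="{sep_token}"):
    """Split an All-Questions-In-One-Line string into a list of questions."""
    return [q.strip() for q in aqol.split(sep) if q.strip()]

split_questions("Is this a test? {sep_token} Is this another Test {sep_token}")
# ['Is this a test?', 'Is this another Test']
```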
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `context`: a `string` feature.
- `questions`: a `string` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|18896| 2067|
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) and [Thomas Simonini](https://huggingface.co/ThomasSimonini) for adding this to the hub
Check out: [How to contribute more](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Visitors
[](https://visitorbadge.io/status?path=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fderek-thomas%2Fsquad-v1.1-t5-question-generation) | [
-0.7599138021469116,
-0.8524377346038818,
0.252856582403183,
0.2049592286348343,
-0.14029856026172638,
-0.07123807817697525,
-0.012758160009980202,
-0.24232815206050873,
0.29776087403297424,
0.47516629099845886,
-1.2883390188217163,
-0.7038534283638,
-0.22360990941524506,
0.304699957370758... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
liuyanchen1015/MULTI_VALUE_wnli_be_perfect | liuyanchen1015 | 2023-04-03T19:45:16Z | 86 | 0 | null | [
"region:us"
] | 2023-04-03T19:45:16Z | 2023-04-03T19:45:12.000Z | 2023-04-03T19:45:12 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 2647
num_examples: 12
- name: test
num_bytes: 14221
num_examples: 46
- name: train
num_bytes: 21327
num_examples: 98
download_size: 20308
dataset_size: 38195
---
# Dataset Card for "MULTI_VALUE_wnli_be_perfect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.526160717010498,
-0.17402641475200653,
0.038531240075826645,
0.2598767578601837,
-0.23686450719833374,
-0.07221449166536331,
0.12727093696594238,
-0.3112806975841522,
1.0776822566986084,
0.303421288728714,
-0.841143786907196,
-0.6432384252548218,
-0.4494988024234772,
-0.2839017808437347... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jiacheng-ye/nl2bash | jiacheng-ye | 2023-04-17T12:55:38Z | 86 | 0 | null | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"code",
"region:us"
] | 2023-04-17T12:55:38Z | 2023-04-17T12:53:49.000Z | 2023-04-17T12:53:49 | ---
task_categories:
- text-generation
language:
- en
tags:
- code
pretty_name: NL2Bash
size_categories:
- 1K<n<10K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ds3lab/ac-sgd-arxiv21 | ds3lab | 2023-04-25T10:45:37Z | 86 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-04-25T10:45:37Z | 2023-04-25T10:23:52.000Z | 2023-04-25T10:23:52 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
polytechXhf/onepiece-dataset | polytechXhf | 2023-05-05T15:17:56Z | 86 | 0 | null | [
"region:us"
] | 2023-05-05T15:17:56Z | 2023-05-02T15:59:45.000Z | 2023-05-02T15:59:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: char_name
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 120488910.0
num_examples: 922
download_size: 120447392
dataset_size: 120488910.0
---
# Dataset Card for "onepiece-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5631002187728882,
-0.33976081013679504,
0.15836703777313232,
0.20180760324001312,
-0.3845575153827667,
-0.15366259217262268,
0.43419328331947327,
0.0018248582491651177,
1.0379948616027832,
0.8251776099205017,
-1.1303819417953491,
-0.7792068123817444,
-0.533807098865509,
-0.3505943715572... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
miladfa7/Brain-MRI-Images-for-Brain-Tumor-Detection | miladfa7 | 2023-05-16T17:11:04Z | 86 | 2 | null | [
"region:us"
] | 2023-05-16T17:11:04Z | 2023-05-03T07:11:39.000Z | 2023-05-03T07:11:39 |
---
task_categories:
- image-classification
- image-segmentation
tags:
- brain
- MRI
- brain-MRI-images
- Tumor
---
Brain Tumor Detection | Vision Transformer 99%
Click -> [Kaggle](https://www.kaggle.com/code/miladfa7/brain-tumor-detection-vision-transformer-99)
-0.29667630791664124,
-0.6315079927444458,
0.7127259969711304,
0.39407896995544434,
-0.5301663875579834,
-0.27903980016708374,
0.2818492352962494,
-0.0941297635436058,
0.49405863881111145,
0.7194265127182007,
-0.66652911901474,
-0.8825181126594543,
-0.756462574005127,
-0.29445308446884155,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kunishou/hh-rlhf-49k-ja | kunishou | 2023-11-07T08:33:43Z | 86 | 15 | null | [
"license:mit",
"region:us"
] | 2023-11-07T08:33:43Z | 2023-05-18T21:19:21.000Z | 2023-05-18T21:19:21 | ---
license: mit
---
This dataset was created by automatically translating part of "Anthropic/hh-rlhf" into Japanese.
This dataset is also included in "mosaicml/dolly_hhrlhf".
The "ng_translation" flag marks rows whose translation failed: a value of "1" means the translation did not succeed, and for those rows "instruction" and "instruction_en" contain the same text.
You can load the dataset as follows to filter out the rows where "ng_translation" is "1" (failed translations):
```
pip install datasets
```
```
from datasets import Dataset, load_dataset
dataset = load_dataset("kunishou/hh-rlhf-49k-ja")
dataset.set_format(type="pandas")
df = dataset["train"][:]
df = df[df["ng_translation"]!="1"].drop(["ng_translation", "index"], axis=1).reset_index()
dataset = Dataset.from_pandas(df)
dataset
```
hh-rlhf repository
https://github.com/anthropics/hh-rlhf
Anthropic/hh-rlhf
https://huggingface.co/datasets/Anthropic/hh-rlhf
mosaicml/dolly_hhrlhf
https://huggingface.co/datasets/mosaicml/dolly_hhrlhf | [
-0.3589732050895691,
-0.5734310746192932,
0.21038834750652313,
0.4139244258403778,
-0.570435643196106,
-0.14235542714595795,
0.02837504632771015,
-0.12014595419168472,
0.6091772317886353,
0.6951594948768616,
-0.9783538579940796,
-0.8039769530296326,
-0.8199175000190735,
0.6647936105728149,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhangyue/test_one | zhangyue | 2023-05-26T09:30:23Z | 86 | 0 | null | [
"region:us"
] | 2023-05-26T09:30:23Z | 2023-05-26T09:30:16.000Z | 2023-05-26T09:30:16 | ---
dataset_info:
features:
- name: id
dtype: string
- name: package_name
dtype: string
- name: review
dtype: string
- name: date
dtype: string
- name: star
dtype: int64
- name: version_id
dtype: int64
splits:
- name: train
num_bytes: 1508
num_examples: 5
- name: test
num_bytes: 956
num_examples: 5
download_size: 9453
dataset_size: 2464
---
# Dataset Card for "test_one"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6946764588356018,
-0.4362788498401642,
0.02191860042512417,
0.23882748186588287,
-0.1985778510570526,
-0.1530478447675705,
0.327990859746933,
0.09137768298387527,
0.7828287482261658,
0.48648199439048767,
-0.9428366422653198,
-0.7361029386520386,
-0.4449605345726013,
-0.22293071448802948... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
13nishit/LoanApprovalPrediction | 13nishit | 2023-06-16T11:15:35Z | 86 | 0 | null | [
"license:unlicense",
"region:us"
] | 2023-06-16T11:15:35Z | 2023-06-16T11:14:04.000Z | 2023-06-16T11:14:04 | ---
license: unlicense
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bgglue/bgglue | bgglue | 2023-08-06T15:22:26Z | 86 | 0 | null | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"task_ids:named-entity-recognition",
"task_ids:natural-language-inference",
"task_ids:part-of-speech",
"task_ids:sent... | 2023-08-06T15:22:26Z | 2023-07-08T10:43:00.000Z | 2023-07-08T10:43:00 | ---
task_categories:
- text-classification
- token-classification
- question-answering
- multiple-choice
language:
- bg
pretty_name: Bulgarian GLUE
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
license:
- mit
- cc-by-3.0
- cc-by-sa-4.0
- other
- cc-by-nc-4.0
- cc-by-nc-3.0
task_ids:
- multiple-choice-qa
- named-entity-recognition
- natural-language-inference
- part-of-speech
- sentiment-analysis
source_datasets:
- bsnlp
- wikiann
- exams
- ct21.t1
- fakenews
- crediblenews
- universal_dependencies
tags:
- check-worthiness-estimation
- fake-new-detection
- humor-detection
- regression
- ranking
---
# Dataset Card for "bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://bgglue.github.io/](https://bgglue.github.io/)
- **Repository:** [https://github.com/bgGLUE](https://github.com/bgGLUE)
- **Paper:** [bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark](https://arxiv.org/abs/2306.02349)
- **Point of Contact:** [bulgarianglue [at] gmail [dot] com](mailto:bulgarianglue@gmail.com)

### Dataset Summary
bgGLUE (Bulgarian General Language Understanding Evaluation) is a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. The benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression).
### Supported Tasks and Leaderboards
List of supported tasks: [Tasks](https://bgglue.github.io/tasks/).
Leaderboard: [bgGLUE Leaderboard](https://bgglue.github.io/leaderboard/).
### Languages
Bulgarian
## Dataset Structure
### Data Instances
<div id="container">
<table id="table-tasks" class="table table-striped table-bordered">
<thead>
<tr>
<th scope="col">Name</th>
<th scope="col">Task type</th>
<th scope="col">Identifier</th>
<th scope="col" data-toggle="tooltip" data-placement="top" title="Tooltip on right">Download</th>
<th scope="col">More Info</th>
<th scope="col">Metrics</th>
<th scope="col">Train / Val / Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>Balto-Slavic NLP Shared Task</td>
<td>Named Entity Recognition</td>
<td>BSNLP</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/bsnlp.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/bsnlp/">Info</a> </td>
<td>F1</td>
<td>724 / 182 / 301</td>
</tr>
<tr>
<td>CheckThat! (2021), Task 1A </td>
<td>Check-Worthiness Estimation</td>
<td>CT21.T1</td>
<td class="text-center"><a href="https://gitlab.com/checkthat_lab/clef2021-checkthat-lab/-/tree/master/task1" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/ct21-t1/">Info</a> </td>
<td>Average Precision</td>
<td>2,995 / 350 / 357</td>
</tr>
<tr>
<td>Cinexio Movie Reviews</td>
<td>Sentiment Analysis</td>
<td>Cinexio</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/cinexio.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/cinexio/">Info</a> </td>
<td>Pearson-Spearman Corr</td>
<td>8,155 / 811 / 861</td>
</tr>
<tr>
<td>Hack the News Datathon (2019)</td>
<td>Fake News Detection</td>
<td>Fake-N</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/fakenews.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/fakenews/">Info</a> </td>
<td>Binary F1</td>
<td>1,990 / 221 / 701</td>
</tr>
<tr>
<td>In Search of Credible News</td>
<td>Humor Detection</td>
<td>Cred.-N</td>
<td class="text-center"><a href="https://forms.gle/Z7PYHMAvFvFusWT37" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/crediblenews/">Info</a> </td>
<td>Binary F1</td>
<td>19,227 / 5,949 / 17,887</td>
</tr>
<tr>
<td>Multi-Subject High School Examinations Dataset</td>
<td>Multiple-choice QA</td>
<td>EXAMS</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/exams.tar.gz" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/exams/">Info</a> </td>
<td>Accuracy</td>
<td>1,512 / 365 / 1,472</td>
</tr>
<tr>
<td>Universal Dependencies</td>
<td>Part-of-Speech Tagging</td>
<td>U.Dep</td>
<td class="text-center"><a href="https://universaldependencies.org/#bulgarian-treebanks" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/udep/">Info</a> </td>
<td>F1</td>
<td>8,907 / 1,115 / 1,116</td>
</tr>
<tr>
<td>Cross-lingual Natural Language Inference</td>
<td>Natural Language Inference</td>
<td>XNLI</td>
<td class="text-center"><a href="https://github.com/facebookresearch/XNLI#download" target="_blank" rel="noopener">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/xnli/">Info</a> </td>
<td>Accuracy</td>
<td>392,702 / 5,010 / 2,490</td>
</tr>
<tr>
<td>Cross-lingual Name Tagging and Linking (PAN-X / WikiAnn)</td>
<td>Named Entity Recognition</td>
<td>PAN-X</td>
<td class="text-center"><a href="https://github.com/bgGLUE/bgglue/raw/main/data/wikiann_bg.tar.gz">URL</a> </td>
<td class="text-center"><a href="https://bgglue.github.io/tasks/task_info/wikiann/">Info</a> </td>
<td>F1</td>
<td>16,237 / 7,029 / 7,263 </td>
</tr>
</tbody>
</table>
</div>
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
Here, we describe the pre-processing steps we took to prepare the datasets before including them in the bgGLUE benchmark. Our main goal was to ensure that the setup evaluates the language understanding abilities of the models in a principled way and across a diverse set of domains. Since all of the datasets were publicly available, we preserved the original setup as much as possible. Nevertheless, we found that some datasets contained duplicate examples across their train/dev/test splits, or that all of the splits came from the same domain, which may overestimate model performance. Therefore, *we removed data leaks* and proposed new topic-based or temporal (i.e., timestamp-based) data splits where needed. We deduplicated examples based on a complete word overlap between two normalized texts, i.e., lowercased and with all stop words removed.
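The deduplication criterion above can be sketched in a few lines of stdlib-only Python. This is a minimal illustration of "complete word overlap between two normalized texts"; the stop-word list here is an invented subset for demonstration, not the one used by the authors:

```python
from typing import Set

# Illustrative stop-word subset (assumption; the benchmark uses a full list).
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "is"}

def normalize(text: str) -> Set[str]:
    # Lowercase, split on whitespace, and drop stop words.
    return {w for w in text.lower().split() if w not in STOP_WORDS}

def is_duplicate(a: str, b: str) -> bool:
    # Two texts are duplicates when their normalized word sets fully overlap.
    return normalize(a) == normalize(b)

print(is_duplicate("The cat sat on mats", "cat sat on mats"))  # True
print(is_duplicate("cat sat", "dog sat"))                      # False
```

In practice such a check would be run pairwise across the train/dev/test splits of each task, removing any test or dev example that collides with a training example.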
## Considerations for Using the Data
### Discussion of Biases
The datasets included in bgGLUE were annotated by human annotators, who could be subject to potential biases in their annotation process. Hence, the datasets in bgGLUE could potentially be misused to develop models that make predictions that are unfair to individuals or groups. Therefore, we ask users of bgGLUE to be aware of such potential biases and risks of misuse. We note that any biases that might exist in the original resources gathered in this benchmark are unintentional and do not aim to cause harm.
### Other Known Limitations
#### Tasks in bgGLUE
The bgGLUE benchmark is comprised of nine challenging NLU tasks, including three token classification tasks, one ranking task and five text classification tasks. While we cover three different types of tasks in the benchmark, we are restricted by the available resources for Bulgarian, and thus we could not include some other NLP tasks, such as language generation. We also consider only NLP tasks and we do not include tasks with other/multiple modalities. Finally, some of the tasks are of similar nature, e.g., we include two datasets for NER and two for credibility/fake news classification.
#### Domains in bgGLUE
The tasks included in bgGLUE span over multiple domains such as social media posts, Wikipedia, and news articles and can test both for short and long document understanding. However, each task is limited to one domain and the topics within the domain do not necessarily have full coverage of all possible topics. Moreover, some of the tasks have overlapping domains, e.g., the documents in both Cred.-N and Fake-N are news articles.
## Additional Information
### Licensing Information
The primary bgGLUE tasks are built on and derived from existing datasets.
We refer users to the original licenses accompanying each dataset.
For each dataset the license is listed on its ["Tasks" page](https://bgglue.github.io/tasks/) on the bgGLUE website.
### Citation Information
```
@inproceedings{hardalov-etal-2023-bgglue,
title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
author = "Hardalov, Momchil and
Atanasova, Pepa and
Mihaylov, Todor and
Angelova, Galia and
Simov, Kiril and
Osenova, Petya and
Stoyanov, Veselin and
Koychev, Ivan and
Nakov, Preslav and
Radev, Dragomir",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.487",
pages = "8733--8759",
}
```
### Contributions
[List of bgGLUE contributors](https://bgglue.github.io/people/) | [
-0.49619293212890625,
-0.6363604664802551,
0.12351775914430618,
0.20057731866836548,
-0.09613147377967834,
-0.005769540090113878,
-0.5334566831588745,
-0.43205326795578003,
0.3149404525756836,
-0.106533944606781,
-0.5750541090965271,
-0.9496968984603882,
-0.5405899286270142,
-0.06896667182... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Vazbeek/alpaca-cs | Vazbeek | 2023-08-09T16:23:40Z | 86 | 1 | null | [
"language:cs",
"license:cc-by-4.0",
"code",
"region:us"
] | 2023-08-09T16:23:40Z | 2023-08-03T22:55:09.000Z | 2023-08-03T22:55:09 | ---
license: cc-by-4.0
language:
- cs
tags:
- code
---
The Alpaca dataset translated into Czech using ChatGPT 3.5. | [
-0.32946813106536865,
-0.7812752723693848,
0.20575101673603058,
0.5260794758796692,
-1.0206700563430786,
-0.28138303756713867,
-0.4729820191860199,
-0.747275173664093,
0.2930010259151459,
0.8266351819038391,
-0.7558605074882507,
-1.061667799949646,
-0.762592077255249,
0.13162235915660858,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andersonbcdefg/c4-1000 | andersonbcdefg | 2023-09-09T22:23:08Z | 86 | 1 | null | [
"region:us"
] | 2023-09-09T22:23:08Z | 2023-09-09T22:23:03.000Z | 2023-09-09T22:23:03 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 2303428
num_examples: 1000
download_size: 1435214
dataset_size: 2303428
---
# Dataset Card for "c4-1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7318416833877563,
-0.07518550753593445,
0.3701966404914856,
0.2577412724494934,
-0.0868212878704071,
0.07647252827882767,
0.41017985343933105,
-0.31883877515792847,
0.8575777411460876,
0.4393619894981384,
-0.828742504119873,
-0.6764722466468811,
-0.45106783509254456,
0.00771677307784557... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fruk19/ptvn_incremental_data | fruk19 | 2023-09-19T04:29:23Z | 86 | 0 | null | [
"region:us"
] | 2023-09-19T04:29:23Z | 2023-09-19T04:02:18.000Z | 2023-09-19T04:02:18 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 432074.0
num_examples: 2
download_size: 426405
dataset_size: 432074.0
---
# Dataset Card for "ptvn_incremental_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4975298345088959,
-0.3225383758544922,
0.08802954107522964,
0.5694665908813477,
-0.3567451536655426,
-0.09734837710857391,
0.4629072844982147,
0.19222749769687653,
0.6776447296142578,
0.6737384796142578,
-0.7210052013397217,
-0.7255836725234985,
-0.5731197595596313,
-0.1744847595691681,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emozilla/proofpile-test-tokenized-mistral | emozilla | 2023-10-07T03:18:31Z | 86 | 0 | null | [
"region:us"
] | 2023-10-07T03:18:31Z | 2023-10-07T03:17:40.000Z | 2023-10-07T03:17:40 | ---
dataset_info:
features:
- name: text
dtype: string
- name: meta
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: train
num_bytes: 1647980074
num_examples: 46251
download_size: 554081392
dataset_size: 1647980074
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "proofpile-test-tokenized-mistral"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5327538847923279,
-0.2741309404373169,
-0.00586341368034482,
0.2721785008907318,
-0.12844090163707733,
-0.11926744878292084,
0.2361544370651245,
-0.00615394301712513,
0.6364052295684814,
0.3789474070072174,
-0.49385038018226624,
-0.7045547366142273,
-0.6972621083259583,
-0.3095786869525... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pranjali97/ha-en_RL-grow1_valid | pranjali97 | 2023-11-04T03:31:03Z | 86 | 0 | null | [
"region:us"
] | 2023-11-04T03:31:03Z | 2023-11-04T03:31:01.000Z | 2023-11-04T03:31:01 | ---
dataset_info:
features:
- name: src
dtype: string
- name: ref
dtype: string
- name: mt
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 1553276
num_examples: 3339
download_size: 369871
dataset_size: 1553276
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ha-en_RL-grow1_valid"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.41530176997184753,
-0.7237225770950317,
0.022525276988744736,
0.43606528639793396,
-0.21676234900951385,
0.004390057176351547,
0.20973631739616394,
-0.3381458818912506,
1.0397100448608398,
0.5305187106132507,
-0.887712299823761,
-0.8354489207267761,
-0.546398937702179,
-0.01272012200206... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arun2023acs/acsrepoind2023 | arun2023acs | 2023-11-22T08:38:29Z | 86 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-22T08:38:29Z | 2023-11-16T04:52:05.000Z | 2023-11-16T04:52:05 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
codeparrot/codeparrot-clean-train | codeparrot | 2022-10-10T15:27:50Z | 85 | 10 | null | [
"region:us"
] | 2022-10-10T15:27:50Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # CodeParrot 🦜 Dataset Cleaned (train)
Train split of [CodeParrot 🦜 Dataset Cleaned](https://huggingface.co/datasets/lvwerra/codeparrot-clean).
## Dataset structure
```python
DatasetDict({
train: Dataset({
features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'],
num_rows: 5300000
})
})
``` | [
-0.5826375484466553,
-0.23288296163082123,
-0.29653751850128174,
-0.001042477902956307,
-0.5039271116256714,
0.19696611166000366,
-0.19376453757286072,
0.11988965421915054,
0.4639662504196167,
0.605984628200531,
-0.3702891170978546,
-0.4633381962776184,
-0.34874051809310913,
0.253769874572... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/nocs-mrtydi-corpus | crystina-z | 2022-03-07T17:15:52Z | 85 | 0 | null | [
"region:us"
] | 2022-03-07T17:15:52Z | 2022-03-06T01:55:04.000Z | 2022-03-06T01:55:04 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
taln-ls2n/inspec | taln-ls2n | 2022-07-21T14:14:59Z | 85 | 3 | null | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"region:us"
] | 2022-07-21T14:14:59Z | 2022-04-12T08:10:45.000Z | 2022-04-12T08:10:45 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 1K<n<10K
pretty_name: Inspec
---
# Inspec Benchmark Dataset for Keyphrase Generation
## About
Inspec is a dataset for benchmarking keyphrase extraction and generation models.
The dataset is composed of 2,000 abstracts of scientific papers collected from the [Inspec database](https://www.theiet.org/resources/inspec/).
Keyphrases were annotated by professional indexers in an uncontrolled setting (that is, not limited to thesaurus entries).
Details about the inspec dataset can be found in the original paper [(Hulth, 2003)][hulth-2003].
Reference (indexer-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in [(Boudin and Gallina, 2021)][boudin-2021].
Text pre-processing (tokenization) is carried out using `spacy` (`en_core_web_sm` model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token).
Stemming (Porter's stemmer implementation provided in `nltk`) is applied before reference keyphrases are matched against the source text.
Details about the process can be found in `prmu.py`.
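The "Present" category of the PRMU scheme depends on matching stemmed keyphrase tokens against the stemmed source text. The sketch below illustrates that matching step; a crude suffix-stripper stands in for Porter's stemmer (the actual pipeline in `prmu.py` uses spaCy tokenization and nltk's Porter implementation), so treat it as an assumption-laden illustration, not the reference code:

```python
def crude_stem(word: str) -> str:
    # Stand-in for Porter's stemmer (nltk.stem.PorterStemmer in the real pipeline).
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def is_present(keyphrase: str, text: str) -> bool:
    # A keyphrase is "Present" if its stemmed tokens occur contiguously
    # in the stemmed source text.
    kp = [crude_stem(w) for w in keyphrase.lower().split()]
    toks = [crude_stem(w) for w in text.lower().split()]
    n = len(kp)
    return any(toks[i:i + n] == kp for i in range(len(toks) - n + 1))

print(is_present("keyword extraction", "improved automatic keyword extractions"))  # True
```

Keyphrases whose stems match only out of order, partially, or not at all fall into the Reordered, Mixed, and Unseen categories, respectively.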
## Content and statistics
The dataset is divided into the following three splits:
| Split | # documents | #words | # keyphrases | % Present | % Reordered | % Mixed | % Unseen |
| :--------- | ----------: | -----: | -----------: | --------: | ----------: | ------: | -------: |
| Train | 1,000 | 141.7 | 9.79 | 78.00 | 9.85 | 6.22 | 5.93 |
| Validation | 500 | 132.2 | 9.15 | 77.96 | 9.82 | 6.75 | 5.47 |
| Test | 500 | 134.8 | 9.83 | 78.70 | 9.92 | 6.48 | 4.91 |
The following data fields are available :
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
## References
- (Hulth, 2003) Anette Hulth. 2003.
[Improved automatic keyword extraction given more linguistic knowledge](https://aclanthology.org/W03-1028).
In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216-223.
- (Boudin and Gallina, 2021) Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
[hulth-2003]: https://aclanthology.org/W03-1028/
[boudin-2021]: https://aclanthology.org/2021.naacl-main.330/ | [
-0.3053695857524872,
-0.40904226899147034,
0.3787989616394043,
0.2155495136976242,
-0.2749614715576172,
0.2800670564174652,
-0.13740603625774384,
-0.1849670708179474,
0.06268421560525894,
0.3204650282859802,
-0.4632062613964081,
-0.7318312525749207,
-0.422910213470459,
0.6289769411087036,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crystina-z/mrtydi-mContriever-mmarco-HN | crystina-z | 2022-07-14T20:00:39Z | 85 | 0 | null | [
"region:us"
] | 2022-07-14T20:00:39Z | 2022-07-14T07:34:00.000Z | 2022-07-14T07:34:00 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
copenlu/citeworth | copenlu | 2022-08-17T13:48:22Z | 85 | 2 | citeworth | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|s2orc",
"language:en",
"license:cc-by-nc-4.0",
"citation detection",
"citation",
"science",
"scholarly... | 2022-08-17T13:48:22Z | 2022-08-17T11:57:29.000Z | 2022-08-17T11:57:29 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
paperswithcode_id: citeworth
pretty_name: CiteWorth
size_categories:
- 1M<n<10M
source_datasets:
- extended|s2orc
tags:
- citation detection
- citation
- science
- scholarly documents
- bio
- medicine
- computer science
- citeworthiness
task_categories:
- text-classification
task_ids: []
---
# Dataset Card for CiteWorth
## Dataset Description
- **Repo** https://github.com/copenlu/cite-worth
- **Paper** https://aclanthology.org/2021.findings-acl.157.pdf
### Dataset Summary
Scientific document understanding is challenging as the data is highly domain specific and diverse. However, datasets for tasks with scientific text require expensive manual annotation and tend to be small and limited to only one or a few fields. At the same time, scientific documents contain many potential training signals, such as citations, which can be used to build large labelled datasets. Given this, we present an in-depth study of cite-worthiness detection in English, where a sentence is labelled for whether or not it cites an external source. To accomplish this, we introduce CiteWorth, a large, contextualized, rigorously cleaned labelled dataset for cite-worthiness detection built from a massive corpus of extracted plain-text scientific documents. We show that CiteWorth is high-quality, challenging, and suitable for studying problems such as domain adaptation. Our best performing cite-worthiness detection model is a paragraph-level contextualized sentence labelling model based on Longformer, exhibiting a 5 F1 point improvement over SciBERT which considers only individual sentences. Finally, we demonstrate that language model fine-tuning with cite-worthiness as a secondary task leads to improved performance on downstream scientific document understanding tasks.
## Dataset Structure
The data is structured as follows
- `paper_id`: The S2ORC paper ID where the paragraph comes from
- `section_idx`: An index into the section array in the original S2ORC data
- `file_index`: The volume in the S2ORC dataset that the paper belongs to
- `file_offset`: Byte offset to the start of the paper json in the S2ORC paper PDF file
- `mag_field_of_study`: The field of study to which a paper belongs (an array, but each paper belongs to a single field)
- `original_text`: The original text of the paragraph
- `section_title`: Title of the section to which the paragraph belongs
- `samples`: An array containing dicts of the cleaned sentences for the paragraph, in order. The fields for each dict are as follows
- `text`: The cleaned text for the sentence
- `label`: Label for the sentence, either `check-worthy` for cite-worthy sentences or `non-check-worthy` non-cite-worthy sentences
- `original_text`: The original sentence text
- `ref_ids`: List of the reference IDs in the S2ORC dataset for papers cited in this sentence
- `citation_text`: List of all citation text in this sentence
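To make the field layout above concrete, here is a hypothetical record (the values are invented for illustration, not real data) and one way to flatten it into sentence-level training pairs for a cite-worthiness classifier:

```python
# Hypothetical record following the documented field layout (invented values).
record = {
    "paper_id": "123456",
    "mag_field_of_study": ["Medicine"],
    "section_title": "Introduction",
    "samples": [
        {"text": "Citations signal external evidence.", "label": "check-worthy",
         "ref_ids": ["BIBREF0"], "citation_text": ["[1]"]},
        {"text": "We study cite-worthiness detection.", "label": "non-check-worthy",
         "ref_ids": [], "citation_text": []},
    ],
}

# Extract (sentence, binary label) pairs; 1 = cite-worthy, 0 = not.
pairs = [(s["text"], int(s["label"] == "check-worthy")) for s in record["samples"]]
print(pairs[0][1], pairs[1][1])  # 1 0
```

Note that the paragraph-level models discussed in the paper consume the whole `samples` array at once rather than individual sentences, so the flattening above suits sentence-level baselines only.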
## Dataset Creation
The data is derived from the [S2ORC dataset](https://github.com/allenai/s2orc), specifically the 20200705v1 release of the data. It is licensed under the [CC By-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/) license. For details on the dataset creation process, see section 3 of our [paper](https://aclanthology.org/2021.findings-acl.157.pdf).
## Citing
Please use the following citation when referencing this work or using the data:
```
@inproceedings{wright2021citeworth,
title={{CiteWorth: Cite-Worthiness Detection for Improved Scientific Document Understanding}},
author={Dustin Wright and Isabelle Augenstein},
booktitle = {Findings of ACL-IJCNLP},
publisher = {Association for Computational Linguistics},
year = 2021
}
``` | [
-0.010812944732606411,
-0.17125563323497772,
0.8322098851203918,
0.17197072505950928,
0.02353666163980961,
-0.4296230375766754,
-0.08586855232715607,
-0.5625153183937073,
0.04871400445699692,
0.049367498606443405,
-0.1630222052335739,
-0.4124878942966461,
-0.8112157583236694,
0.40981727838... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/euadr | bigbio | 2022-12-22T15:44:36Z | 85 | 2 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:44:36Z | 2022-11-13T22:08:25.000Z | 2022-11-13T22:08:25 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: EU-ADR
homepage: https://www.sciencedirect.com/science/article/pii/S1532046412000573
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- RELATION_EXTRACTION
---
# Dataset Card for EU-ADR
## Dataset Description
- **Homepage:** https://www.sciencedirect.com/science/article/pii/S1532046412000573
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts.
## Citation Information
```
@article{VANMULLIGEN2012879,
title = {The EU-ADR corpus: Annotated drugs, diseases, targets, and their relationships},
journal = {Journal of Biomedical Informatics},
volume = {45},
number = {5},
pages = {879-884},
year = {2012},
note = {Text Mining and Natural Language Processing in Pharmacogenomics},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2012.04.004},
url = {https://www.sciencedirect.com/science/article/pii/S1532046412000573},
author = {Erik M. {van Mulligen} and Annie Fourrier-Reglat and David Gurwitz and Mariam Molokhia and Ainhoa Nieto and Gianluca Trifiro and Jan A. Kors and Laura I. Furlong},
keywords = {Text mining, Corpus development, Machine learning, Adverse drug reactions},
abstract = {Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug–disorder, drug–target, and target–disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts.}
}
```
| [
-0.3865334391593933,
-0.46446630358695984,
0.49136611819267273,
0.0031700944527983665,
-0.06805072724819183,
-0.1545475423336029,
-0.3892728090286255,
-0.7020359039306641,
0.6074618697166443,
0.5235460996627808,
-0.3520070016384125,
-0.7786720395088196,
-0.6952287554740906,
0.7553997039794... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PlanTL-GOB-ES/sts-es | PlanTL-GOB-ES | 2023-01-19T09:45:42Z | 85 | 2 | null | [
"task_categories:text-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"language:es",
"region:us"
] | 2023-01-19T09:45:42Z | 2022-11-17T12:11:58.000Z | 2022-11-17T12:11:58 | ---
YAML tags:
annotations_creators:
- expert-generated
language:
- es
language_creators:
- found
multilinguality:
- monolingual
pretty_name: STS-es
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- semantic-similarity-scoring
- text-scoring
---
# STS-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://alt.qcri.org/semeval2014/task10/
- **Point of Contact:** [Aitor Gonzalez](aitor.gonzalez@bsc.es)
### Dataset Summary
For Semantic Text Similarity, we collected the Spanish test sets from SemEval-2014 (Agirre et al., 2014) and SemEval-2015 (Agirre et al., 2015). Since no training data was provided for the Spanish subtask, we randomly sampled both datasets into 1,321 sentences for the train set, 78 sentences for the development set, and 156 sentences for the test set. To make the task harder for the models, we purposely made the development set smaller than the test set.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Semantic Text Similarity Scoring
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
```
{
  'sentence1': 'El "tendón de Aquiles" ("tendo Achillis") o "tendón calcáneo" ("tendo calcaneus") es un tendón de la parte posterior de la pierna.',
  'sentence2': 'El tendón de Aquiles es la extensión tendinosa de los tres músculos de la pantorrilla: gemelo, sóleo y plantar delgado.',
  'label': 2.8
}
```
### Data Fields
- sentence1: String
- sentence2: String
- label: Float
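Systems on this task are commonly scored by correlating predicted similarity values with the gold `label` field. A stdlib-only Pearson correlation sketch (illustrative; not part of the official EvalEs evaluation code):

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation between predicted and gold similarity scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical gold labels vs. model predictions.
r = pearson([2.8, 4.0, 1.0], [3.0, 4.5, 0.5])
```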
### Data Splits
- train: 1,321 instances
- dev: 78 instances
- test: 156 instances
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The source data came from the Spanish Wikipedia (2013 dump) and texts from Spanish news (2014).
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
#### Initial Data Collection and Normalization
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
#### Who are the source language producers?
Journalists and Wikipedia contributors.
### Annotations
#### Annotation process
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
#### Who are the annotators?
For more information visit the paper from the SemEval-2014 Shared Task [(Agirre et al., 2014)](https://aclanthology.org/S14-2010.pdf) and the SemEval-2015 Shared Task [(Agirre et al., 2015)](https://aclanthology.org/S15-2045.pdf).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
No postprocessing steps were applied to mitigate potential social biases.
## Additional Information
### Citation Information
The following papers must be cited when using this corpus:
```
@inproceedings{agirre2015semeval,
title={Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability},
author={Agirre, Eneko and Banea, Carmen and Cardie, Claire and Cer, Daniel and Diab, Mona and Gonzalez-Agirre, Aitor and Guo, Weiwei and Lopez-Gazpio, Inigo and Maritxalar, Montse and Mihalcea, Rada and others},
booktitle={Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015)},
pages={252--263},
year={2015}
}
@inproceedings{agirre2014semeval,
title={SemEval-2014 Task 10: Multilingual Semantic Textual Similarity.},
author={Agirre, Eneko and Banea, Carmen and Cardie, Claire and Cer, Daniel M and Diab, Mona T and Gonzalez-Agirre, Aitor and Guo, Weiwei and Mihalcea, Rada and Rigau, German and Wiebe, Janyce},
booktitle={SemEval@ COLING},
pages={81--91},
year={2014}
}
```
| [
-0.3575954735279083,
-0.5789332985877991,
0.3133910298347473,
0.4078652560710907,
-0.21423831582069397,
-0.15324175357818604,
-0.4897885024547577,
-0.49511709809303284,
0.3576466739177704,
0.5199576616287231,
-0.6904169917106628,
-0.8322674036026001,
-0.6766802668571472,
0.4718548059463501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jordyvl/RVL-CDIP-N | jordyvl | 2023-01-02T14:25:47Z | 85 | 1 | null | [
"license:cc-by-3.0",
"region:us"
] | 2023-01-02T14:25:47Z | 2023-01-02T14:13:33.000Z | 2023-01-02T14:13:33 | ---
license: cc-by-3.0
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': budget
'1': email
'2': form
'3': handwritten
'4': invoice
'5': letter
'6': memo
'7': news_article
'8': questionnaire
'9': resume
'10': scientific_publication
'11': specification
splits:
- name: test
num_bytes: 2272995060.864
num_examples: 1002
download_size: 544832160
dataset_size: 2272995060.864
---
This dataset was introduced in https://openreview.net/pdf?id=uDlkiCI5N7Y
The original source is here: https://drive.google.com/drive/folders/1VDnwRhmguvhKUCZ0_nv54RMGgqfYHGfz
Many thanks to Stefan Larson! | [
-0.13717572391033173,
-0.03787793219089508,
0.27411413192749023,
0.13456813991069794,
-0.05824152007699013,
-0.24010814726352692,
0.03763560950756073,
-0.1351281702518463,
0.3675684630870819,
0.8160032629966736,
-0.9088441133499146,
-0.7698329091072083,
-0.2825338542461395,
0.0624840073287... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FourthBrainGenAI/MarketMail-AI | FourthBrainGenAI | 2023-04-26T07:08:28Z | 85 | 0 | null | [
"region:us"
] | 2023-04-26T07:08:28Z | 2023-04-26T07:08:24.000Z | 2023-04-26T07:08:24 | ---
dataset_info:
features:
- name: product
dtype: string
- name: description
dtype: string
- name: marketing_email
dtype: string
splits:
- name: train
num_bytes: 30474
num_examples: 17
download_size: 31271
dataset_size: 30474
---
# Dataset Card for "MarketMail-AI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7774815559387207,
-0.3220463693141937,
0.027096077799797058,
0.10248523205518723,
-0.32045090198516846,
0.10663509368896484,
0.1858704537153244,
-0.07055893540382385,
1.0947282314300537,
0.4254354238510132,
-0.8033149242401123,
-0.8406431078910828,
-0.5250604152679443,
-0.32932108640670... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
skeskinen/TinyStories-hf | skeskinen | 2023-05-17T18:13:44Z | 85 | 16 | null | [
"arxiv:2305.07759",
"region:us"
] | 2023-05-17T18:13:44Z | 2023-05-17T17:23:20.000Z | 2023-05-17T17:23:20 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1911420483
num_examples: 2119719
- name: validation
num_bytes: 19306310
num_examples: 21990
download_size: 1000775442
dataset_size: 1930726793
---
A description of this dataset can be found at https://arxiv.org/abs/2305.07759
Copied from roneneldan/TinyStories
Modified with:
```
import ftfy.bad_codecs  # registers the 'sloppy-windows-1252' codec
from datasets import Dataset, DatasetDict

def load_stories(path):
    # Read the raw dump, split on the story delimiter, and strip whitespace.
    with open(path, 'r', encoding='sloppy-windows-1252') as f:
        stories = f.read().split('<|endoftext|>')
    return [s.strip() for s in stories]

train = load_stories('./TinyStories-train.txt')
valid = load_stories('./TinyStories-valid.txt')
dataset = DatasetDict({
    'train': Dataset.from_dict({'text': train}),
    'validation': Dataset.from_dict({'text': valid}),
})
dataset.save_to_disk('./TinyStories')
``` | [
-0.14539946615695953,
-0.3015367388725281,
0.1913342922925949,
-0.246981680393219,
-0.07072125375270844,
-0.36783865094184875,
-0.5048475861549377,
-0.09377841651439667,
0.10312966257333755,
0.3874882757663727,
-0.653083324432373,
-0.563650369644165,
-0.2197183519601822,
0.5210134387016296... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SotirisLegkas/clickbait | SotirisLegkas | 2023-06-23T11:30:01Z | 85 | 0 | null | [
"region:us"
] | 2023-06-23T11:30:01Z | 2023-06-23T11:08:28.000Z | 2023-06-23T11:08:28 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
baber/agieval | baber | 2023-10-26T00:49:22Z | 85 | 2 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"arxiv:2304.06364",
"region:us"
] | 2023-10-26T00:49:22Z | 2023-07-23T00:31:09.000Z | 2023-07-23T00:31:09 | ---
license: mit
language:
- en
task_categories:
- question-answering
- text-generation
pretty_name: AGIEval
---
# Dataset Card for AGIEval
## Dataset Description
- **Homepage:** https://github.com/microsoft/AGIEval/blob/main/README.md
- **Repository:** https://github.com/microsoft/AGIEval
- **Paper:** https://arxiv.org/abs/2304.06364
### Dataset Summary
AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.
### Citation Information
Dataset taken from the AGIEval Repo.
```
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citation for each dataset:
```
@inproceedings{ling-etal-2017-program,
title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
author = "Ling, Wang and
Yogatama, Dani and
Dyer, Chris and
Blunsom, Phil",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1015",
doi = "10.18653/v1/P17-1015",
pages = "158--167",
abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
}
@inproceedings{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
journal={NeurIPS},
year={2021}
}
@inproceedings{Liu2020LogiQAAC,
title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
booktitle={International Joint Conference on Artificial Intelligence},
year={2020}
}
@inproceedings{zhong2019jec,
title={JEC-QA: A Legal-Domain Question Answering Dataset},
author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},
booktitle={Proceedings of AAAI},
year={2020},
}
@article{Wang2021FromLT,
title={From LSAT: The Progress and Challenges of Complex Reasoning},
author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
year={2021},
volume={30},
pages={2201-2216}
}
``` | [
-0.40782564878463745,
-0.8824685215950012,
0.35912904143333435,
0.20591241121292114,
0.06390003114938736,
-0.12473873794078827,
-0.09935356676578522,
-0.39198583364486694,
-0.055278416723012924,
0.33717137575149536,
-0.6412237286567688,
-0.29785796999931335,
-0.46127650141716003,
0.1673450... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mychen76/stack-exchange-paired-500k | mychen76 | 2023-09-01T23:55:09Z | 85 | 0 | null | [
"region:us"
] | 2023-09-01T23:55:09Z | 2023-09-01T23:18:07.000Z | 2023-09-01T23:18:07 | StackExchange Paired 500K is a subset of lvwerra/stack-exchange-paired,
which is a processed version of HuggingFaceH4/stack-exchange-preferences. The following steps were applied:
- Parse HTML to Markdown with markdownify
- Create pairs (response_j, response_k) where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning.
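The pairing rule described above (j rated better than k, at most 10 pairs per question) can be sketched as follows; the field names and input format here are assumptions for illustration, not the exact dataset schema:

```python
from itertools import combinations

def make_pairs(answers, max_pairs=10):
    # answers: list of (text, score) tuples for one question.
    # Emit pairs where response_j was rated above response_k.
    pairs = []
    for (text_j, s_j), (text_k, s_k) in combinations(answers, 2):
        if s_j == s_k:
            continue  # ties carry no preference signal
        chosen, rejected = (text_j, text_k) if s_j > s_k else (text_k, text_j)
        pairs.append({"response_j": chosen, "response_k": rejected})
        if len(pairs) == max_pairs:
            break
    return pairs

# Hypothetical answers with scores for a single question.
pairs = make_pairs([("A", 3), ("B", 1), ("C", 2)])
```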
---
license: mit
---
| [
-0.6224074363708496,
-0.27153390645980835,
0.13471892476081848,
0.3982347249984741,
-0.233764186501503,
0.05029540881514549,
-0.0671933963894844,
-0.524912416934967,
0.8247837424278259,
0.6695085167884827,
-0.773816704750061,
-0.23931001126766205,
-0.12152247875928879,
0.25515538454055786,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat | open-llm-leaderboard | 2023-10-25T05:20:52Z | 85 | 0 | null | [
"region:us"
] | 2023-10-25T05:20:52Z | 2023-09-13T04:03:49.000Z | 2023-09-13T04:03:49 | ---
pretty_name: Evaluation run of TigerResearch/tigerbot-70b-chat
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TigerResearch/tigerbot-70b-chat](https://huggingface.co/TigerResearch/tigerbot-70b-chat)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 4 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-25T05:20:39.857272](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat/blob/main/results_2023-10-25T05-20-39.857272.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.43791946308724833,\n\
\ \"em_stderr\": 0.005080846199755935,\n \"f1\": 0.47991820469798696,\n\
\ \"f1_stderr\": 0.004915876956213108,\n \"acc\": 0.6161274146961446,\n\
\ \"acc_stderr\": 0.012720219505629717\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.43791946308724833,\n \"em_stderr\": 0.005080846199755935,\n\
\ \"f1\": 0.47991820469798696,\n \"f1_stderr\": 0.004915876956213108\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4564063684609553,\n \
\ \"acc_stderr\": 0.013720038270485325\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774106\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TigerResearch/tigerbot-70b-chat
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|arc:challenge|25_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|arc:challenge|25_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T02_18_07.282954
path:
- '**/details_harness|drop|3_2023-10-24T02-18-07.282954.parquet'
- split: 2023_10_25T05_20_39.857272
path:
- '**/details_harness|drop|3_2023-10-25T05-20-39.857272.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-25T05-20-39.857272.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T02_18_07.282954
path:
- '**/details_harness|gsm8k|5_2023-10-24T02-18-07.282954.parquet'
- split: 2023_10_25T05_20_39.857272
path:
- '**/details_harness|gsm8k|5_2023-10-25T05-20-39.857272.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-25T05-20-39.857272.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hellaswag|10_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hellaswag|10_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T04-03-35.733983.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T04-21-04.931146.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T04-03-35.733983.parquet'
- split: 2023_09_13T04_21_04.931146
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T04-21-04.931146.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T04-21-04.931146.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T02_18_07.282954
path:
- '**/details_harness|winogrande|5_2023-10-24T02-18-07.282954.parquet'
- split: 2023_10_25T05_20_39.857272
path:
- '**/details_harness|winogrande|5_2023-10-25T05-20-39.857272.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-25T05-20-39.857272.parquet'
- config_name: results
data_files:
- split: 2023_09_13T04_03_35.733983
path:
- results_2023-09-13T04-03-35.733983.parquet
- split: 2023_09_13T04_21_04.931146
path:
- results_2023-09-13T04-21-04.931146.parquet
- split: 2023_10_24T02_18_07.282954
path:
- results_2023-10-24T02-18-07.282954.parquet
- split: 2023_10_25T05_20_39.857272
path:
- results_2023-10-25T05-20-39.857272.parquet
- split: latest
path:
- results_2023-10-25T05-20-39.857272.parquet
---
# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TigerResearch/tigerbot-70b-chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TigerResearch/tigerbot-70b-chat](https://huggingface.co/TigerResearch/tigerbot-70b-chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 4 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T05:20:39.857272](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-chat/blob/main/results_2023-10-25T05-20-39.857272.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.43791946308724833,
"em_stderr": 0.005080846199755935,
"f1": 0.47991820469798696,
"f1_stderr": 0.004915876956213108,
"acc": 0.6161274146961446,
"acc_stderr": 0.012720219505629717
},
"harness|drop|3": {
"em": 0.43791946308724833,
"em_stderr": 0.005080846199755935,
"f1": 0.47991820469798696,
"f1_stderr": 0.004915876956213108
},
"harness|gsm8k|5": {
"acc": 0.4564063684609553,
"acc_stderr": 0.013720038270485325
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774106
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.4266425669193268,
-0.6426070332527161,
0.12587343156337738,
0.23939351737499237,
-0.1959225833415985,
0.16515237092971802,
-0.4120216965675354,
-0.16391493380069733,
0.4411354660987854,
0.5552589893341064,
-0.7140095829963684,
-0.9163943529129028,
-0.5387649536132812,
0.1995708048343658... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
texonom/texonom-md | texonom | 2023-10-29T18:47:20Z | 85 | 0 | null | [
"region:us"
] | 2023-10-29T18:47:20Z | 2023-10-29T18:18:27.000Z | 2023-10-29T18:18:27 | ---
dataset_info:
features:
- name: title
dtype: string
- name: parent
dtype: string
- name: created
dtype: string
- name: editor
dtype: string
- name: creator
dtype: string
- name: edited
dtype: string
- name: refs
dtype: string
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 11117155
num_examples: 23960
download_size: 6320648
dataset_size: 11117155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "texonom-md"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7136139273643494,
-0.40553197264671326,
0.5475561022758484,
-0.03906525298953056,
-0.2986510396003723,
0.16340474784374237,
0.20081664621829987,
-0.16540464758872986,
0.8539213538169861,
0.7355819344520569,
-0.7721943855285645,
-1.025878667831421,
-0.7811852693557739,
-0.158121883869171... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
juny116/few_glue | juny116 | 2021-08-13T05:37:37Z | 84 | 1 | null | [
"arxiv:2012.15723",
"region:us"
] | 2021-08-13T05:37:37Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # FewGLUE_32dev
This repository contains the FewGLUE_32dev dataset, an extension of [FewGLUE](https://github.com/timoschick/fewglue), which enables NLU few-shot learning tasks to be benchmarked under a new 32-sample-dev setting. It has been shown in [previous work](https://arxiv.org/abs/2012.15723) that using larger development sets confers a significant advantage beyond few-shot. FewGLUE_32dev is built by adding additional few-shot dev sets with 32 examples randomly selected from the original/unused SuperGLUE training sets.
### Data Format
The data files follow the exact same format as [SuperGLUE task files](https://super.gluebenchmark.com/tasks).
### Structure
For each SuperGLUE task `T`, the directory `FewGLUE_32dev/T` contains the 32-sample-dev file (`dev32.jsonl`), which consists of 32 examples for few-shot validation.
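As a concrete sketch, a `dev32.jsonl` file can be read line by line with the standard library. The task directory name and the two records below are invented stand-ins, not actual FewGLUE_32dev data:

```python
import json
import tempfile
from pathlib import Path

def load_jsonl(path):
    """Read one JSON object per line, as in SuperGLUE task files."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Demo with a stand-in file; in practice point this at
# FewGLUE_32dev/<task>/dev32.jsonl for the task you are validating on.
tmp = Path(tempfile.mkdtemp()) / "dev32.jsonl"
tmp.write_text(
    '{"idx": 0, "question": "is the sky blue", "label": true}\n'
    '{"idx": 1, "question": "is water dry", "label": false}\n'
)
dev32 = load_jsonl(tmp)
print(len(dev32), dev32[0]["label"])  # 2 True
```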
| [
-0.6087669730186462,
-0.36305785179138184,
0.15649214386940002,
0.09696850180625916,
-0.010017438791692257,
0.10411395132541656,
-0.08874396234750748,
-0.3790130615234375,
0.037697259336709976,
0.2485978603363037,
-0.8654784560203552,
-0.7691509127616882,
-0.43258827924728394,
0.0287428162... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wenhu/TheoremQA | wenhu | 2023-07-15T17:54:40Z | 84 | 12 | null | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:mit",
"question answering",
"math",
"science",
"visual question answering",
"arxiv:2305.12524",
"region:us"
] | 2023-07-15T17:54:40Z | 2023-05-24T02:57:57.000Z | 2023-05-24T02:57:57 | ---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- question answering
- math
- science
- visual question answering
pretty_name: TheoremQA
size_categories:
- n<1K
---
## Introduction
We propose the first question-answering dataset driven by STEM theorems. We annotated 800 QA pairs covering 350+ theorems spanning Math, EE&CS, Physics, and Finance. The dataset was collected by human experts and is of very high quality. We provide the dataset as a new benchmark to test the limits of large language models in applying theorems to solve challenging university-level questions. We also provide a pipeline below to prompt LLMs and evaluate their outputs with WolframAlpha.
## How to use TheoremQA
```
from datasets import load_dataset
dataset = load_dataset("wenhu/TheoremQA")
for d in dataset['test']:
print(d)
```
To use the images, download images.zip from https://huggingface.co/datasets/wenhu/TheoremQA/blob/main/images.zip. Each image's filename is given in the `Picture` field.
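A minimal sketch of looking up a record's image inside the downloaded archive; the record, the member name, and the archive contents below are invented for illustration (the real `Picture` values come from the dataset):

```python
import io
import zipfile

# Hypothetical record: only the "Picture" field matters for this sketch.
record = {"Picture": "maths/diagram_001.png"}

# Build a stand-in archive in memory; in practice use
# zipfile.ZipFile("images.zip") on the downloaded file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("maths/diagram_001.png", b"\x89PNG fake bytes")

# Read the member named by the record's Picture field.
with zipfile.ZipFile(buf) as zf:
    img_bytes = zf.read(record["Picture"])
print(len(img_bytes) > 0)  # True
```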
## Arxiv Paper:
https://arxiv.org/abs/2305.12524
## Code
https://github.com/wenhuchen/TheoremQA/tree/main | [
-0.34012365341186523,
-0.6439244747161865,
0.451065331697464,
0.09799394011497498,
-0.15338732302188873,
0.2669406533241272,
0.14377965033054352,
-0.15034057199954987,
-0.06435064226388931,
0.541282594203949,
-0.9839375019073486,
-0.6235520243644714,
-0.09876379370689392,
0.157257080078125... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PurCL/bincorp-26m-all | PurCL | 2023-08-22T20:07:44Z | 84 | 0 | null | [
"region:us"
] | 2023-08-22T20:07:44Z | 2023-08-21T16:25:16.000Z | 2023-08-21T16:25:16 | ---
viewer: true
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: code
dtype: string
- name: data_dep
dtype: string
splits:
- name: train
num_bytes: 39826202125.70429
num_examples: 14019961
- name: test
num_bytes: 11713589027.6
num_examples: 4123518
- name: valid
num_bytes: 7028153984.695704
num_examples: 2474111
download_size: 19420221346
dataset_size: 58567945137.99999
---
# Dataset Card for "bincorp-26m-all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7596138715744019,
-0.2102772295475006,
0.17258226871490479,
0.5168122053146362,
-0.4469626843929291,
0.3413926362991333,
0.5428205728530884,
-0.20779955387115479,
0.9457536935806274,
0.9721954464912415,
-0.895889401435852,
-0.897806704044342,
-0.6769877672195435,
-0.02650189958512783,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Binaryy/cars-for-sale | Binaryy | 2023-09-16T13:43:03Z | 84 | 1 | null | [
"region:us"
] | 2023-09-16T13:43:03Z | 2023-09-16T13:42:34.000Z | 2023-09-16T13:42:34 | ---
dataset_info:
features:
- name: image
dtype: image
- name: 'Unnamed: 0'
dtype: int64
- name: Car Name
dtype: string
- name: Region
dtype: string
- name: Price
dtype: string
- name: Status
dtype: string
- name: Mileage
dtype: string
- name: Car Name.1
dtype: string
- name: Image URL
dtype: string
splits:
- name: train
num_bytes: 8301111.18
num_examples: 1332
download_size: 8084700
dataset_size: 8301111.18
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cars-for-sale"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6143603324890137,
-0.23149648308753967,
0.4164656102657318,
0.22058862447738647,
-0.3550480902194977,
0.0009970703395083547,
0.07598764449357986,
-0.2345678210258484,
0.4501361846923828,
0.27170729637145996,
-0.7645955085754395,
-0.775518000125885,
-0.07914511114358902,
-0.4923948347568... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-tweet_eval-sentiment-45124a-38605145054 | autoevaluate | 2023-10-04T14:23:31Z | 84 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2023-10-04T14:23:31Z | 2023-10-04T14:20:04.000Z | 2023-10-04T14:20:04 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- tweet_eval
eval_info:
task: multi_class_classification
model: siberett/roberta-sentiment-analysis-finetune
metrics: []
dataset_name: tweet_eval
dataset_config: sentiment
dataset_split: train
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: siberett/roberta-sentiment-analysis-finetune
* Dataset: tweet_eval
* Config: sentiment
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@emuggins](https://huggingface.co/emuggins) for evaluating this model. | [
-0.43158158659935,
-0.32990017533302307,
0.28507500886917114,
0.2866591811180115,
-0.04258182644844055,
0.06471361964941025,
-0.17717903852462769,
-0.3334711492061615,
0.11606784164905548,
0.22203631699085236,
-0.8676900863647461,
-0.2971491515636444,
-0.8316278457641602,
-0.08737264573574... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FinGPT/fingpt-finred | FinGPT | 2023-10-10T06:58:37Z | 84 | 2 | null | [
"region:us"
] | 2023-10-10T06:58:37Z | 2023-10-10T06:56:22.000Z | 2023-10-10T06:56:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 25113554
num_examples: 27558
- name: test
num_bytes: 4477146
num_examples: 5112
download_size: 2114835
dataset_size: 29590700
---
# Dataset Card for "fingpt-finred"
This dataset consists of both a Relation Extraction part and a Classification part, and it is used in Multi-task Instruction Tuning.
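For illustration, the `instruction`, `input`, and `output` fields can be assembled into a training prompt; the template and the sample row below are assumptions for this sketch, not the exact format used by FinGPT:

```python
# Hedged sketch: one common way to flatten an instruction-tuning row
# into a single training string. The template is an assumption.
def format_example(row):
    return (
        f"Instruction: {row['instruction']}\n"
        f"Input: {row['input']}\n"
        f"Answer: {row['output']}"
    )

# Invented sample row with the same three fields as the dataset schema.
row = {
    "instruction": "Extract the relation between the entities.",
    "input": "Apple was founded by Steve Jobs.",
    "output": "founded_by",
}
text = format_example(row)
print(text.splitlines()[-1])  # Answer: founded_by
```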
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6441412568092346,
-0.7079293727874756,
0.010761014185845852,
0.10699209570884705,
-0.3359595835208893,
-0.11539751291275024,
-0.15506069362163544,
-0.3094398081302643,
0.1907530128955841,
0.6532710194587708,
-0.9590868949890137,
-0.4621811807155609,
-0.47546327114105225,
-0.312679797410... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdapterOcean/gorilla_16k_standardized_cluster_4_std | AdapterOcean | 2023-10-22T23:16:48Z | 84 | 0 | null | [
"region:us"
] | 2023-10-22T23:16:48Z | 2023-10-22T23:16:45.000Z | 2023-10-22T23:16:45 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: cluster
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5005609
num_examples: 8256
download_size: 1950794
dataset_size: 5005609
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "gorilla_16k_standardized_cluster_4_std"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6507262587547302,
-0.4412652254104614,
0.1524631530046463,
0.5302769541740417,
-0.533308744430542,
0.04707306995987892,
0.22310155630111694,
-0.24639582633972168,
0.8432286977767944,
0.18892212212085724,
-0.6699557900428772,
-1.007930874824524,
-0.577848494052887,
-0.09278220683336258,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Rewcifer/validation_2000_cutoff_llama | Rewcifer | 2023-11-03T02:26:04Z | 84 | 0 | null | [
"region:us"
] | 2023-11-03T02:26:04Z | 2023-11-03T02:26:02.000Z | 2023-11-03T02:26:02 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 72668795.5058724
num_examples: 14551
download_size: 13175560
dataset_size: 72668795.5058724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "validation_2000_cutoff_llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4627076983451843,
-0.20858891308307648,
0.3700117766857147,
0.3932651877403259,
-0.33948466181755066,
0.0026819210033863783,
0.5163437724113464,
-0.09318854659795761,
0.7130667567253113,
0.5660369396209717,
-1.0401966571807861,
-0.6299758553504944,
-0.5656029582023621,
0.142921209335327... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
allenai/WildChat-nontoxic | allenai | 2023-11-11T06:05:01Z | 84 | 5 | null | [
"size_categories:100K<n<1M",
"region:us"
] | 2023-11-11T06:05:01Z | 2023-11-11T03:28:14.000Z | 2023-11-11T03:28:14 | ---
dataset_info:
features:
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: timestamp
dtype: timestamp[s, tz=UTC]
- name: conversation
list:
- name: content
dtype: string
- name: language
dtype: string
- name: redacted
dtype: bool
- name: role
dtype: string
- name: toxic
dtype: bool
- name: turn
dtype: int64
- name: language
dtype: string
- name: openai_moderation
list:
- name: categories
struct:
- name: harassment
dtype: bool
- name: harassment/threatening
dtype: bool
- name: hate
dtype: bool
- name: hate/threatening
dtype: bool
- name: self-harm
dtype: bool
- name: self-harm/instructions
dtype: bool
- name: self-harm/intent
dtype: bool
- name: sexual
dtype: bool
- name: sexual/minors
dtype: bool
- name: violence
dtype: bool
- name: violence/graphic
dtype: bool
- name: category_scores
struct:
- name: harassment
dtype: float64
- name: harassment/threatening
dtype: float64
- name: hate
dtype: float64
- name: hate/threatening
dtype: float64
- name: self-harm
dtype: float64
- name: self-harm/instructions
dtype: float64
- name: self-harm/intent
dtype: float64
- name: sexual
dtype: float64
- name: sexual/minors
dtype: float64
- name: violence
dtype: float64
- name: violence/graphic
dtype: float64
- name: flagged
dtype: bool
- name: detoxify_moderation
list:
- name: identity_attack
dtype: float32
- name: insult
dtype: float32
- name: obscene
dtype: float32
- name: severe_toxicity
dtype: float32
- name: sexual_explicit
dtype: float32
- name: threat
dtype: float32
- name: toxicity
dtype: float32
- name: toxic
dtype: bool
- name: redacted
dtype: bool
splits:
- name: train
num_bytes: 2949938170
num_examples: 529514
download_size: 1587001052
dataset_size: 2949938170
pretty_name: WildChat-nontoxic
extra_gated_prompt: >-
Access to this dataset is automatically granted upon accepting the [**AI2
ImpACT License - Low Risk Artifacts (“LR
Agreement”)**](https://allenai.org/licenses/impact-lr) and completing all
fields below.
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I AGREE to the terms and conditions of the LR Agreement above: checkbox
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
I CERTIFY that the information I have provided is true and accurate: checkbox
size_categories:
- 100K<n<1M
---
# Dataset Card for WildChat-nontoxic
## Dataset Description
- **Paper:** https://wenting-zhao.github.io/papers/wildchat.pdf
- **License:** https://allenai.org/licenses/impact-lr
- **Language(s) (NLP):** multi-lingual
- **Point of Contact:** [Yuntian Deng](mailto:yuntiand@allenai.org)
### Dataset Summary
WildChat-nontoxic is the nontoxic subset of the [WildChat dataset](https://huggingface.co/datasets/allenai/WildChat), a collection of 530K conversations between human users and ChatGPT. The full WildChat dataset containing 650K conversations can be found [here](https://huggingface.co/datasets/allenai/WildChat). We collected WildChat by offering online users free access to OpenAI's GPT-3.5-Turbo and GPT-4. The dataset contains a broad spectrum of user-chatbot interactions that are not previously covered by other instruction fine-tuning datasets: for example, interactions include ambiguous user requests, code-switching, topic-switching, political discussions, etc. WildChat can serve both as a dataset for instructional fine-tuning and as a valuable resource for studying user behaviors.
WildChat-nontoxic has been openly released under AI2's ImpACT license as a low-risk artifact. The use of WildChat-nontoxic to cause harm is strictly prohibited.
### Languages
66 languages were detected in WildChat.
### Data Fields
- `conversation_id` (string): Each conversation has a unique id.
- `model` (string): The underlying OpenAI model, such as gpt-3.5-turbo or gpt-4.
- `timestamp` (timestamp): The timestamp of the last turn in the conversation in UTC.
- `conversation` (list): A list of user/assistant utterances. Each utterance is a dictionary containing the `role` of the speaker (user or assistant), the `content` of the utterance, the detected `language` of the utterance, whether the content of the utterance is considered `toxic`, and whether PII has been detected and anonymized (`redacted`).
- `turn` (int): The number of turns in the conversation. A turn refers to one round of user-assistant interaction.
- `language` (string): The language of the conversation. Note that this is the most frequently detected language in the utterances of the conversation.
- `openai_moderation` (list): A list of OpenAI Moderation results. Each element in the list corresponds to one utterance in the conversation.
- `detoxify_moderation` (list): A list of Detoxify results. Each element in the list corresponds to one utterance in the conversation.
- `toxic` (bool): Whether this conversation contains any utterances considered to be toxic by either OpenAI Moderation or Detoxify.
- `redacted` (bool): Whether this conversation contains any utterances in which PII is detected and anonymized.
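As an illustration of how these fields might be used, here is a small sketch that filters out conversations with anonymized PII via the `redacted` flag; the two rows are invented stand-ins carrying only a subset of the fields above:

```python
# Invented sample rows mirroring a few of the WildChat top-level fields.
rows = [
    {"conversation_id": "a1", "model": "gpt-4", "turn": 2,
     "language": "English", "toxic": False, "redacted": False},
    {"conversation_id": "b2", "model": "gpt-3.5-turbo", "turn": 1,
     "language": "English", "toxic": False, "redacted": True},
]

# Keep only conversations in which no PII was detected and anonymized.
clean = [r for r in rows if not r["redacted"]]
print([r["conversation_id"] for r in clean])  # ['a1']
```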
### Personal and Sensitive Information
The data has been de-identified with Microsoft Presidio and hand-written rules by the authors.
### Inappropriate Content
If you discover inappropriate conversations in this nontoxic subset, please report their conversation ids to us for removal by sending us an email or using community discussions.
### Licensing Information
WildChat-nontoxic is made available under the [**AI2
ImpACT License - Low Risk Artifacts ("LR
Agreement")**](https://allenai.org/licenses/impact-lr)
### Citation Information
Please cite [our paper](https://wenting-zhao.github.io/papers/wildchat.pdf) when using this dataset:
```
@misc{zhao2023wildchat,
title={(InThe)WildChat: 650K ChatGPT Interaction Logs in the Wild},
    author={Wenting Zhao and Xiang Ren and Jack Hessel and Claire Cardie and Yejin Choi and Yuntian Deng},
year={2023},
eprint={},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.05812749266624451,
-0.9331356287002563,
-0.05356188490986824,
0.3330337703227997,
-0.35768818855285645,
-0.2543945610523224,
-0.34205225110054016,
-0.7279444336891174,
0.4911106824874878,
0.4832347333431244,
-0.6939513683319092,
-0.4795110821723938,
-0.38298293948173523,
0.0000670084336... | null | null | null | null | null | null | null | null | null | null | null | null | null |