| id | author | last_modified | downloads | likes | paperswithcode_id | tags | lastModified | createdAt | key | created | card | embedding | library_name | pipeline_tag | mask_token | card_data | widget_data | model_index | config | transformers_info | spaces | safetensors | transformersInfo | modelId | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
PaddlePaddle/dureader_robust | PaddlePaddle | 2022-03-10T05:14:18Z | 83 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2022-03-10T05:14:18Z | 2022-03-10T04:46:26.000Z | 2022-03-10T04:46:26 | ---
license: apache-2.0
---
RUC-DataLab/ER-dataset | RUC-DataLab | 2022-07-05T07:58:55Z | 83 | 0 | null | [
"region:us"
] | 2022-07-05T07:58:55Z | 2022-03-24T01:49:22.000Z | 2022-03-24T01:49:22 | ### dataset-list
The datasets in this repository come from the public benchmarks DeepMatcher, Magellan, and WDC, which cover a variety of domains such as products, citations, and restaurants. Each dataset contains entities from two relational tables with multiple attributes, together with a set of labeled matching/non-matching entity pairs.
| dataset_name | domain |
| -------------- | ----------- |
| abt_buy | Product |
| amazon_google | Product |
| anime | Anime |
| beer | Product |
| books2 | Book |
| books4 | Book |
| cameras | WDC-Product |
| computers | WDC-Product |
| cosmetics | Cosmetics |
| dblp_acm | Citation |
| dblp_scholar | Citation |
| ebooks1 | eBook |
| fodors_zagat | Restaurant |
| itunes_amazon | Music |
| movies1 | Movie |
| restaurants1 | Restaurant |
| restaurants3 | Restaurant |
| restaurants4 | Restaurant |
| shoes | WDC-Product |
| walmart_amazon | Product |
| watches | WDC-Product |
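To make the pair structure concrete, here is a minimal illustrative sketch; the field names (`left`, `right`, `label`) and the exact-match baseline are assumptions for illustration, not the repository's actual schema:

```python
# Toy sketch of an entity-matching record: two entity descriptions drawn
# from separate relational tables plus a binary match label. Field names
# here are illustrative, not the repository's actual schema.

def exact_match_baseline(pair):
    """Trivial baseline: predict a match when normalized names are equal."""
    left_name = pair["left"]["name"].strip().lower()
    right_name = pair["right"]["name"].strip().lower()
    return 1 if left_name == right_name else 0

pairs = [
    {"left": {"name": "Sony Alpha A6000"}, "right": {"name": "sony alpha a6000"}, "label": 1},
    {"left": {"name": "Canon EOS 90D"}, "right": {"name": "Nikon D7500"}, "label": 0},
]

predictions = [exact_match_baseline(p) for p in pairs]
print(predictions)  # [1, 0]
```

A real matcher would compare all attributes, not just a name, but the record shape (two entity descriptions plus a label) is the same.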
huggan/cats | huggan | 2022-03-24T09:48:20Z | 83 | 2 | null | [
"region:us"
] | 2022-03-24T09:48:20Z | 2022-03-24T09:41:31.000Z | 2022-03-24T09:41:31 | Entry not found |
nouamanetazi/test111 | nouamanetazi | 2022-05-15T19:28:57Z | 83 | 0 | null | [
"region:us"
] | 2022-05-15T19:28:57Z | 2022-05-15T18:50:55.000Z | 2022-05-15T18:50:55 | test |
alexfabbri/answersumm | alexfabbri | 2022-12-14T20:18:28Z | 83 | 3 | null | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"query-based-summarization",
"arxiv:2111.06474",
"region:us"
] | 2022-12-14T20:18:28Z | 2022-06-09T14:58:23.000Z | 2022-06-09T14:58:23 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
tags:
- query-based-summarization
---
# Dataset Card for answersumm
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/Alex-Fabbri/AnswerSumm
- **Paper:** [AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization](https://arxiv.org/abs/2111.06474)
- **Point of Contact:** [Alex Fabbri](mailto:afabbri@salesforce.com)
### Dataset Summary
The AnswerSumm dataset is an English-language dataset of questions and answers collected from a [StackExchange data dump](https://archive.org/details/stackexchange). The dataset was created to support the task of query-focused answer summarization with an emphasis on multi-perspective answers.
The dataset consists of over 4200 such question-answer threads annotated by professional linguists and includes over 8700 summaries. We decompose the task into several annotation stages: sentence selection, sentence clustering, cluster summarization, and overall summarization. For each thread, the annotator writes two summaries; for the first, the annotator marks sentences for inclusion in the final summary and is instructed to stay close to the wording of those sentences rather than abstracting away from them. A subset of the examples in the test set has multiple annotators.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
A data point comprises a question with a `title` field containing the overview of the question and a `question` that elaborates on the title. The answers are sentence tokenized and contain relevance labels, labels for inclusion in the final summary, and cluster labels. We include cluster summaries, overall summaries, and additional metadata.
An example from the AnswerSumm test set looks as follows:
```json
{
  "example_id": "9_24",
  "annotator_id": [1],
  "question": {
    "author": "gaming.stackexchange.com/users/11/Jeffrey",
    "forum": "gaming.stackexchange.com",
    "link": "gaming.stackexchange.com/questions/1",
    "question": "Now that the Engineer update has come, there will be lots of Engineers building up everywhere. How should this best be handled?",
    "question_tags": "<team-fortress-2>",
    "title": "What is a good strategy to deal with lots of engineers turtling on the other team?"
  },
  "answers": [
    {
      "answer_details": {
        "author": "gaming.stackexchange.com/users/44/Corv1nus",
        "score": 49
      },
      "sents": [
        {
          "text": "Lots of medics with lots of ubers on high-damage-dealing classes.",
          "label": [0],
          "label_summ": [0],
          "cluster_id": [[-1]]
        },
        ...
      ]
    },
    ...
  ],
  "summaries": [
    [
      "Demomen usually work best against a sentry farm. Heavies or pyros can also be effective. Medics should be in the frontline to absorb the shock. Build a teleporter to help your team through.",
      "Demomen are best against a sentry farm. Heavies or pyros can also be effective. The medic should lead the uber combo. ..."
    ]
  ],
  "cluster_summaries": [
    "Demomen are best against a sentry farm.",
    "Heavies or pyros can also be effective.",
    ...
  ]
}
```
### Data Fields
- question: contains metadata about the question and forum
- question: the body of the question post
- title: the title of the question post
- question_tags: user-provided question tags
- link: link to the original question
- author: link to the author's user page (as requested by StackExchange's attribution policy)
- answers: list of sentence-tokenized answers
- answer_details: dictionary consisting of link to answer author's user page (author) and community-assigned score (score)
- sents: sentences that compose the answer
- text: the sentence text
- label: a list (to generalize to multi-annotator scenarios) of whether the sentence is labeled as relevant or not for answering the question.
- label_summ: a list of whether the sentence was used to write the first annotator-created summary (that is the first summary in `summaries`)
- cluster_id: a list of lists (potentially multiple annotators and a sentence can be in potentially multiple clusters) of the clusters a sentence belongs to. -1 implies no cluster. This label can be used to aggregate sentences into clusters across answers.
- summaries: list of list of summaries. Each annotator wrote two summaries. The first in the list is the summary in which the instructor was told to mark sentences relevant for inclusion in the summary and then closely use the words of these sentences, while for the second summary the annotator was asked to paraphrase and condense the cluster summaries but was not asked to reduce abstraction.
- annotator_id: a list of the ids of the annotator(s) who completed all tasks related to that thread.
- mismatch_info: a dict of any issues in processing the excel files on which annotations were completed.
- rel_sent_not_in_cluster: list of booleans indicating whether there are sentences that are labeled as relevant but were not included in a cluster.
- cluster_sents_not_matched: list of sentences that were found in a cluster but which our processing script could not automatically match to sentences in the source answers. If cluster summarization is of interest, you may want to process these examples separately using clusters_orig.
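As an illustration of how the `cluster_id` field can aggregate sentences across answers, here is a minimal sketch over toy records that mimic the field layout above (they are not real dataset entries):

```python
from collections import defaultdict

# Toy answers mimicking the field layout above (not real dataset entries).
answers = [
    {"sents": [
        {"text": "Demomen counter sentries well.", "label": [1], "cluster_id": [[0]]},
        {"text": "Unrelated aside.", "label": [0], "cluster_id": [[-1]]},
    ]},
    {"sents": [
        {"text": "Stickies out-range the sentry.", "label": [1], "cluster_id": [[0]]},
        {"text": "Heavies can tank the damage.", "label": [1], "cluster_id": [[1]]},
    ]},
]

# Group sentences by cluster id across answers; -1 means "not clustered".
clusters = defaultdict(list)
for answer in answers:
    for sent in answer["sents"]:
        for cid in sent["cluster_id"][0]:  # first (and here only) annotator
            if cid != -1:
                clusters[cid].append(sent["text"])

print(dict(clusters))
# {0: ['Demomen counter sentries well.', 'Stickies out-range the sentry.'],
#  1: ['Heavies can tank the damage.']}
```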
### Data Splits
The data is split into training, validation, and test sets using stratified sampling on the source forums. There are 2783, 500, and 1000 train/validation/test threads, respectively.
## Dataset Creation
### Curation Rationale
AnswerSumm was built to provide a testbed for query-focused summarization of multi-perspective answers. The data collection was designed to tackle multiple subtasks including sentence selection, clustering, cluster summarization, and overall summarization.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering examples based on a whitelist of StackExchange forums that we believed a layperson could summarize. We asked annotators to remove examples that required technical knowledge or additional context beyond what was present in the answers.
#### Who are the source language producers?
The language producers are the users of the StackExchange forums sampled.
### Annotations
#### Annotation process
Please see our [paper](https://arxiv.org/pdf/2111.06474.pdf) for additional annotation details. We began with a pre-pilot of 50 examples, followed by a pilot of 500 and a final annotation of 5000 examples. This release contains the results of the final data collection. We will release the instructions used in data collection.
#### Who are the annotators?
The annotators are professional linguists who were obtained through an internal contractor.
### Personal and Sensitive Information
We did not anonymize the data. We followed the specifications from StackExchange [here](https://archive.org/details/stackexchange) to include author information.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop systems that automatically summarize multi-perspective answers. A system that succeeds at this task would be able to summarize many perspectives present in an answer and not limit itself to a single perspective.
### Discussion of Biases
While StackExchange allows for the exchange of information and ideas, hate and harassment may exist on this site. While our annotators did not flag examples in this process, we encourage users of the dataset to reach out with concerns.
We also note that this dataset is limited in its monolingual coverage.
## Additional Information
### Dataset Curators
The dataset was collected by Alex Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab during work done at Facebook.
### Licensing Information
The data is released under cc-by-sa 4.0 following the original StackExchange [release](https://archive.org/details/stackexchange).
### Citation Information
```bibtex
@misc{fabbri-etal-2022-answersumm,
title={AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization},
  author={Alexander R. Fabbri and Xiaojian Wu and Srini Iyer and Haoran Li and Mona Diab},
year={2022},
eprint={2111.06474},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2111.06474}
}
```
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366581 | autoevaluate | 2022-11-07T06:35:22Z | 83 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:35:22Z | 2022-11-07T06:34:19.000Z | 2022-11-07T06:34:19 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v3
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-125m_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v3
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v3
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
bigbio/hprd50 | bigbio | 2022-12-22T15:44:46Z | 83 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-12-22T15:44:46Z | 2022-11-13T22:08:57.000Z | 2022-11-13T22:08:57 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: HPRD50
homepage:
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for HPRD50
## Dataset Description
- **Homepage:**
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE,NER
HPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers
referenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,
splitting each abstract into sentences, and in each sentence there may be entities and
interactions between those entities. In this particular dataset, entities are all
proteins and interactions are thus protein-protein interactions.
Moreover, all entities are normalized to the HPRD database. These normalized terms are
stored in each entity's 'type' attribute in the source XML. This means the dataset can
determine e.g. that "Janus kinase 2" and "Jak2" are referencing the same normalized
entity.
Because the dataset contains entities and relations, it is suitable for Named Entity
Recognition and Relation Extraction.
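The normalization behaviour described above can be sketched with a toy XML fragment; the element and attribute names are illustrative and not necessarily those of the actual HPRD50 source files:

```python
import xml.etree.ElementTree as ET

# Toy fragment mimicking the described layout: each entity carries its
# normalized HPRD term in the 'type' attribute. Element and attribute
# names are illustrative, not necessarily those of the real source files.
xml = """
<sentence>
  <entity id="e1" type="Janus kinase 2">Jak2</entity>
  <entity id="e2" type="Janus kinase 2">Janus kinase 2</entity>
</sentence>
"""

root = ET.fromstring(xml)
normalized = {e.get("id"): e.get("type") for e in root.iter("entity")}

# Two different surface forms resolve to the same normalized entity.
print(normalized["e1"] == normalized["e2"])  # True
```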
## Citation Information
```
@article{fundel2007relex,
  title={RelEx--Relation extraction using dependency parse trees},
  author={Fundel, Katrin and K{\"u}ffner, Robert and Zimmer, Ralf},
journal={Bioinformatics},
volume={23},
number={3},
pages={365--371},
year={2007},
publisher={Oxford University Press}
}
```
farrael004/CynicAI-Dataset | farrael004 | 2023-04-07T22:43:02Z | 83 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-04-07T22:43:02Z | 2023-04-07T22:42:16.000Z | 2023-04-07T22:42:16 | ---
license: apache-2.0
---
ctu-aic/csfever_v2 | ctu-aic | 2023-07-27T08:52:58Z | 83 | 1 | null | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:natural-language-inference",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:fever",
"language:cs",
"license:cc-by-sa-3.0",
"Fact-checking",
"arxiv:2201.... | 2023-07-27T08:52:58Z | 2023-05-09T14:19:36.000Z | 2023-05-09T14:19:36 | ---
license: cc-by-sa-3.0
task_categories:
- text-classification
- text-retrieval
task_ids:
- natural-language-inference
- document-retrieval
language:
- cs
tags:
- Fact-checking
pretty_name: CsFEVERv2
multilinguality: monolingual
source_datasets: fever
size_categories:
- 100K<n<1M
---
# Dataset Card for "CsFEVERv2"
## Dataset Description
CsFEVERv2 is a dataset for Czech fact-checking developed as part of a bachelor's thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering of
the Czech Technical University in Prague. The dataset consists of an **original** subset, which is an iteration of CsFEVER with new data and better processing, and
**f1**, **precision**, and **07** subsets filtered using an NLI model and optimized threshold values. The **wiki_pages** subset is a processed Wikipedia dump from
August 2022 with correct revids; it should be used to map evidence from the datasets to Wikipedia texts. Additionally, preprocessed subsets **original_nli**, **f1_nli**, **precision_nli**, and **07_nli**
for training NLI models are included.
The **original** subset can be used to generate other filtered datasets by applying different thresholds to the `predicted_label` and `predicted_score` fields.
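The thresholding described above can be sketched as follows over toy rows; the criterion used here (keep examples whose predicted label agrees with the gold label at sufficient confidence) is our illustrative assumption, not a statement of the thesis's exact filtering rule:

```python
def filter_by_threshold(rows, threshold):
    """Keep examples whose NLI prediction agrees with the gold label at
    confidence >= threshold, dropping the prediction columns afterwards.
    This is an illustrative sketch of the described filtering step."""
    return [
        {k: v for k, v in row.items() if not k.startswith("predicted_")}
        for row in rows
        if row["predicted_label"] == row["label"] and row["predicted_score"] >= threshold
    ]

# Toy rows mirroring the predicted_label/predicted_score fields (not real data).
rows = [
    {"id": 1, "label": "SUPPORTS", "predicted_label": "SUPPORTS", "predicted_score": 0.92},
    {"id": 2, "label": "REFUTES",  "predicted_label": "SUPPORTS", "predicted_score": 0.95},
    {"id": 3, "label": "SUPPORTS", "predicted_label": "SUPPORTS", "predicted_score": 0.40},
]

filtered = filter_by_threshold(rows, threshold=0.7)
print([r["id"] for r in filtered])  # [1]
```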
### Languages
Czech
## Dataset Usage Example
```python
from datasets import load_dataset

# load the default (original) subset
dataset = load_dataset("ctu-aic/csfever_v2")
dataset = load_dataset("ctu-aic/csfever_v2", "original")
# load the f1, f1_nli, precision, precision_nli, 07, or 07_nli subsets analogously
dataset = load_dataset("ctu-aic/csfever_v2", "f1")
# load the wiki_pages subset
dataset = load_dataset("ctu-aic/csfever_v2", "wiki_pages")
```
## Dataset Structure
### Data Instances
#### original
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'predicted_label': 'SUPPORTS',
 'predicted_score': 0.921731,
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### f1, precision, 07
An example of 'train' looks as follows.
```json
{'id': 75397,
'label': 'SUPPORTS',
'claim': 'Nikolaj Coster-Waldau pracoval pro Fox Broadcasting Company.',
'evidence': [ [ "Nikolaj Coster-Waldau", "Nikolaj Coster-Waldau" ], [ "Fox Broadcasting Company", "Fox Broadcasting Company" ] ]}
```
#### original_nli, f1_nli, precision_nli, 07_nli
An example of 'train' looks as follows.
```json
{'id': 155439,
'label': 2,
'claim': 'Newcastle United FC vyhrál pět ligových titulů.',
'evidence': "Ronnie Simpson. Ronnie Simpson (21. října 1930, Glasgow – 19. dubna 2004, Edinburgh) byl skotský fotbalový brankář..."}
```
#### wiki_pages
An example of 'wiki_pages' looks as follows.
```json
{'id': 80916,
'revid': 20561555,
'url': "https://cs.wikipedia.org/wiki?curid=80916",
'title': "Altruismus",
 'text': "Altruismus (z lat. \"alter\", druhý, 3. pád \"altrui\", druhému) je moderní ..."}
```
### Data Fields
#### original
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `predicted_label`: a `string` feature. (label predicted by NLI model)
- `predicted_score`: a `float32` feature. (confidence of `predicted_label` as predicted by the NLI model)
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### f1, precision, 07
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence`: a `sequence` feature.
#### original_nli, f1_nli, precision_nli, 07_nli
- `id`: a `int32` feature.
- `label`: a `int32` feature.
- `claim`: a `string` feature.
- `evidence`: a `string` feature.
#### wiki_pages
- `id`: a `int32` feature.
- `revid`: a `int32` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
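The `wiki_pages` fields above support the evidence-to-text mapping mentioned in the dataset description; here is a minimal sketch with toy rows (not real data; treating the second element of each evidence pair as the Wikipedia page title is an assumption made for this illustration):

```python
# Toy wiki_pages rows (illustrative, not real data), keyed by title.
wiki_pages = [
    {"id": 80916, "revid": 20561555, "title": "Altruismus",
     "text": "Altruismus je moderni pojem pro nezistnou pomoc druhym."},
    {"id": 1, "revid": 2, "title": "Praha",
     "text": "Praha je hlavni mesto Ceske republiky."},
]
by_title = {page["title"]: page["text"] for page in wiki_pages}

# An `evidence` entry is a list of string pairs; treating the second
# element of each pair as the page title is an illustrative assumption.
evidence = [["Altruismus", "Altruismus"]]
texts = [by_title[title] for _, title in evidence if title in by_title]
print(texts[0])  # Altruismus je moderni pojem pro nezistnou pomoc druhym.
```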
### Data Splits
#### original
| | train | dev | test |
|----------|-------:|-----:|------:|
| original | 118950 | 7458 | 7520 |
#### f1
| | train | dev | test |
|----|------:|-----:|-----:|
| f1 | 83438 | 5445 | 5328 |
#### precision
| | train | dev | test |
|-----------|-------:|-----:|------:|
| precision | 60828 | 4288 | 4236 |
#### 07
| | train | dev | test |
|----|-------:|-----:|------:|
| 07 | 108607 | 6685 | 6623 |
#### wiki_pages
| | wiki_pages |
|------------|-----------:|
| wiki_pages | 825078 |
# Citation
```bibtex
@article{Ullrich_2023,
doi = {10.1007/s10579-023-09654-3},
url = {https://doi.org/10.1007%2Fs10579-023-09654-3},
year = 2023,
month = {may},
publisher = {Springer Science and Business Media {LLC}},
author = {Herbert Ullrich and Jan Drchal and Martin Rýpar and Hana Vincourová and Václav Moravec},
title = {{CsFEVER} and {CTKFacts}: acquiring Czech data for fact verification},
journal = {Language Resources and Evaluation},
archivePrefix={arXiv},
eprint={2201.11115},
}
```
```bibtex
@thesis{Mlynar_2023,
author = {Mlynář, Tomáš},
	type = {Bachelor's Thesis},
title = {Automated Fact Checking Based on Czech Wikipedia},
institution = {Czech Technical University in Prague, Faculty of Electrical Engineering},
date = {2023},
url = {http://hdl.handle.net/10467/109219}
}
```
Yuhthe/vietnews | Yuhthe | 2023-07-26T02:59:45Z | 83 | 0 | null | [
"task_categories:summarization",
"language:vi",
"region:us"
] | 2023-07-26T02:59:45Z | 2023-07-25T10:24:34.000Z | 2023-07-25T10:24:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: guid
dtype: int64
- name: title
dtype: string
- name: abstract
dtype: string
- name: article
dtype: string
splits:
- name: train
num_bytes: 325418455
num_examples: 99134
- name: validation
num_bytes: 73397317
num_examples: 22184
- name: test
num_bytes: 74536959
num_examples: 22498
download_size: 241345943
dataset_size: 473352731
task_categories:
- summarization
language:
- vi
---
# Dataset Card for "vietnews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_MayaPH__GodziLLa2-70B | open-llm-leaderboard | 2023-08-27T12:25:18Z | 83 | 0 | null | [
"region:us"
] | 2023-08-27T12:25:18Z | 2023-08-17T23:55:34.000Z | 2023-08-17T23:55:34 | ---
pretty_name: Evaluation run of MayaPH/GodziLLa2-70B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MayaPH/GodziLLa2-70B](https://huggingface.co/MayaPH/GodziLLa2-70B) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MayaPH__GodziLLa2-70B\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-08-17T01:04:34.392139](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B/blob/main/results_2023-08-17T01%3A04%3A34.392139.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6984504472294196,\n\
\ \"acc_stderr\": 0.031326889644367126,\n \"acc_norm\": 0.7020694580624655,\n\
\ \"acc_norm_stderr\": 0.03129855324383113,\n \"mc1\": 0.4259485924112607,\n\
\ \"mc1_stderr\": 0.017310471904076544,\n \"mc2\": 0.6153983714812797,\n\
\ \"mc2_stderr\": 0.014670323036655197\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6868600682593856,\n \"acc_stderr\": 0.013552671543623504,\n\
\ \"acc_norm\": 0.7141638225255973,\n \"acc_norm_stderr\": 0.013203196088537372\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6891057558255328,\n\
\ \"acc_stderr\": 0.00461913649735984,\n \"acc_norm\": 0.8753236407090221,\n\
\ \"acc_norm_stderr\": 0.003296764320821918\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n\
\ \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.5703703703703704,\n\
\ \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7960526315789473,\n \"acc_stderr\": 0.03279000406310051,\n\
\ \"acc_norm\": 0.7960526315789473,\n \"acc_norm_stderr\": 0.03279000406310051\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.71,\n\
\ \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.71,\n \
\ \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7660377358490567,\n \"acc_stderr\": 0.02605529690115292,\n\
\ \"acc_norm\": 0.7660377358490567,\n \"acc_norm_stderr\": 0.02605529690115292\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8194444444444444,\n\
\ \"acc_stderr\": 0.032166008088022675,\n \"acc_norm\": 0.8194444444444444,\n\
\ \"acc_norm_stderr\": 0.032166008088022675\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956911,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956911\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"\
acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.03533133389323657,\n\
\ \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.03533133389323657\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3333333333333333,\n\
\ \"acc_stderr\": 0.04690650298201943,\n \"acc_norm\": 0.3333333333333333,\n\
\ \"acc_norm_stderr\": 0.04690650298201943\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \
\ \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816507\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6680851063829787,\n\
\ \"acc_stderr\": 0.030783736757745657,\n \"acc_norm\": 0.6680851063829787,\n\
\ \"acc_norm_stderr\": 0.030783736757745657\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.04697085136647863,\n\
\ \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.04697085136647863\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\"\
: 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n\
\ \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.47354497354497355,\n\
\ \"acc_stderr\": 0.025715239811346758,\n \"acc_norm\": 0.47354497354497355,\n\
\ \"acc_norm_stderr\": 0.025715239811346758\n },\n \"harness|hendrycksTest-formal_logic|5\"\
: {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 0.044715725362943486,\n\
\ \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n\
\ },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.52,\n\
\ \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-high_school_biology|5\"\
: {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172527,\n\
\ \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172527\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5665024630541872,\n \"acc_stderr\": 0.034867317274198714,\n \"\
acc_norm\": 0.5665024630541872,\n \"acc_norm_stderr\": 0.034867317274198714\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\"\
: 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706467,\n\
\ \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706467\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8535353535353535,\n \"acc_stderr\": 0.025190921114603925,\n \"\
acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.025190921114603925\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.018088393839078912,\n\
\ \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.018088393839078912\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.7128205128205128,\n \"acc_stderr\": 0.022939925418530613,\n\
\ \"acc_norm\": 0.7128205128205128,\n \"acc_norm_stderr\": 0.022939925418530613\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815642,\n \
\ \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815642\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.027553614467863814,\n\
\ \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.027553614467863814\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.48344370860927155,\n \"acc_stderr\": 0.0408024418562897,\n \"\
acc_norm\": 0.48344370860927155,\n \"acc_norm_stderr\": 0.0408024418562897\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.908256880733945,\n \"acc_stderr\": 0.012376323409137118,\n \"\
acc_norm\": 0.908256880733945,\n \"acc_norm_stderr\": 0.012376323409137118\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.625,\n \"acc_stderr\": 0.033016908987210894,\n \"acc_norm\": 0.625,\n\
\ \"acc_norm_stderr\": 0.033016908987210894\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.8970588235294118,\n \"acc_stderr\": 0.02132833757080437,\n\
\ \"acc_norm\": 0.8970588235294118,\n \"acc_norm_stderr\": 0.02132833757080437\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9071729957805907,\n \"acc_stderr\": 0.01888975055095671,\n \
\ \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.01888975055095671\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8116591928251121,\n\
\ \"acc_stderr\": 0.026241132996407266,\n \"acc_norm\": 0.8116591928251121,\n\
\ \"acc_norm_stderr\": 0.026241132996407266\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8244274809160306,\n \"acc_stderr\": 0.03336820338476072,\n\
\ \"acc_norm\": 0.8244274809160306,\n \"acc_norm_stderr\": 0.03336820338476072\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.859504132231405,\n \"acc_stderr\": 0.031722334260021585,\n \"\
acc_norm\": 0.859504132231405,\n \"acc_norm_stderr\": 0.031722334260021585\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8333333333333334,\n\
\ \"acc_stderr\": 0.036028141763926456,\n \"acc_norm\": 0.8333333333333334,\n\
\ \"acc_norm_stderr\": 0.036028141763926456\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.8159509202453987,\n \"acc_stderr\": 0.03044677768797173,\n\
\ \"acc_norm\": 0.8159509202453987,\n \"acc_norm_stderr\": 0.03044677768797173\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5803571428571429,\n\
\ \"acc_stderr\": 0.04684099321077106,\n \"acc_norm\": 0.5803571428571429,\n\
\ \"acc_norm_stderr\": 0.04684099321077106\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n\
\ \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9017094017094017,\n\
\ \"acc_stderr\": 0.019503444900757567,\n \"acc_norm\": 0.9017094017094017,\n\
\ \"acc_norm_stderr\": 0.019503444900757567\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \
\ \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8684546615581098,\n\
\ \"acc_stderr\": 0.01208670521425043,\n \"acc_norm\": 0.8684546615581098,\n\
\ \"acc_norm_stderr\": 0.01208670521425043\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7543352601156069,\n \"acc_stderr\": 0.023176298203992005,\n\
\ \"acc_norm\": 0.7543352601156069,\n \"acc_norm_stderr\": 0.023176298203992005\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6312849162011173,\n\
\ \"acc_stderr\": 0.016135759015030126,\n \"acc_norm\": 0.6312849162011173,\n\
\ \"acc_norm_stderr\": 0.016135759015030126\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n\
\ \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7684887459807074,\n\
\ \"acc_stderr\": 0.023956532766639133,\n \"acc_norm\": 0.7684887459807074,\n\
\ \"acc_norm_stderr\": 0.023956532766639133\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.021613809395224805,\n\
\ \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.021613809395224805\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.574468085106383,\n \"acc_stderr\": 0.029494827600144363,\n \
\ \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.029494827600144363\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5814863102998696,\n\
\ \"acc_stderr\": 0.012599505608336477,\n \"acc_norm\": 0.5814863102998696,\n\
\ \"acc_norm_stderr\": 0.012599505608336477\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7022058823529411,\n \"acc_stderr\": 0.027778298701545443,\n\
\ \"acc_norm\": 0.7022058823529411,\n \"acc_norm_stderr\": 0.027778298701545443\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7581699346405228,\n \"acc_stderr\": 0.017322789207784326,\n \
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.017322789207784326\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7591836734693878,\n \"acc_stderr\": 0.027372942201788163,\n\
\ \"acc_norm\": 0.7591836734693878,\n \"acc_norm_stderr\": 0.027372942201788163\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n\
\ \"acc_stderr\": 0.023335401790166323,\n \"acc_norm\": 0.8756218905472637,\n\
\ \"acc_norm_stderr\": 0.023335401790166323\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070796,\n\
\ \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070796\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4259485924112607,\n\
\ \"mc1_stderr\": 0.017310471904076544,\n \"mc2\": 0.6153983714812797,\n\
\ \"mc2_stderr\": 0.014670323036655197\n }\n}\n```"
repo_url: https://huggingface.co/MayaPH/GodziLLa2-70B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|arc:challenge|25_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hellaswag|10_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T01:04:34.392139.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-08-17T01:04:34.392139.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T01:04:34.392139.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-08-17T01:04:34.392139.parquet'
- config_name: results
data_files:
- split: 2023_08_17T01_04_34.392139
path:
- results_2023-08-17T01:04:34.392139.parquet
- split: latest
path:
- results_2023-08-17T01:04:34.392139.parquet
---
# Dataset Card for Evaluation run of MayaPH/GodziLLa2-70B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/MayaPH/GodziLLa2-70B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [MayaPH/GodziLLa2-70B](https://huggingface.co/MayaPH/GodziLLa2-70B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
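The per-model repository ids follow a simple convention that can be read off this card's links: the `/` in the hub model id becomes `__`. A small helper sketch under that assumption (the convention is inferred from this card, not from official documentation):

```python
def details_repo_id(model_id: str) -> str:
    """Build the leaderboard details repo id for a hub model id.

    Assumption: "/" in the model id maps to "__", as observed in this
    card ("MayaPH/GodziLLa2-70B" -> "details_MayaPH__GodziLLa2-70B").
    """
    return "open-llm-leaderboard/details_" + model_id.replace("/", "__")

# The aggregated numbers then live in the "results" config, e.g.:
# from datasets import load_dataset
# results = load_dataset(details_repo_id("MayaPH/GodziLLa2-70B"),
#                        "results", split="latest")
```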
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MayaPH__GodziLLa2-70B",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-08-17T01:04:34.392139](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B/blob/main/results_2023-08-17T01%3A04%3A34.392139.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6984504472294196,
"acc_stderr": 0.031326889644367126,
"acc_norm": 0.7020694580624655,
"acc_norm_stderr": 0.03129855324383113,
"mc1": 0.4259485924112607,
"mc1_stderr": 0.017310471904076544,
"mc2": 0.6153983714812797,
"mc2_stderr": 0.014670323036655197
},
"harness|arc:challenge|25": {
"acc": 0.6868600682593856,
"acc_stderr": 0.013552671543623504,
"acc_norm": 0.7141638225255973,
"acc_norm_stderr": 0.013203196088537372
},
"harness|hellaswag|10": {
"acc": 0.6891057558255328,
"acc_stderr": 0.00461913649735984,
"acc_norm": 0.8753236407090221,
"acc_norm_stderr": 0.003296764320821918
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7960526315789473,
"acc_stderr": 0.03279000406310051,
"acc_norm": 0.7960526315789473,
"acc_norm_stderr": 0.03279000406310051
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.71,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.71,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7660377358490567,
"acc_stderr": 0.02605529690115292,
"acc_norm": 0.7660377358490567,
"acc_norm_stderr": 0.02605529690115292
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8194444444444444,
"acc_stderr": 0.032166008088022675,
"acc_norm": 0.8194444444444444,
"acc_norm_stderr": 0.032166008088022675
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.03533133389323657,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.03533133389323657
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201943,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201943
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6680851063829787,
"acc_stderr": 0.030783736757745657,
"acc_norm": 0.6680851063829787,
"acc_norm_stderr": 0.030783736757745657
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04697085136647863,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04697085136647863
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47354497354497355,
"acc_stderr": 0.025715239811346758,
"acc_norm": 0.47354497354497355,
"acc_norm_stderr": 0.025715239811346758
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8193548387096774,
"acc_stderr": 0.021886178567172527,
"acc_norm": 0.8193548387096774,
"acc_norm_stderr": 0.021886178567172527
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5665024630541872,
"acc_stderr": 0.034867317274198714,
"acc_norm": 0.5665024630541872,
"acc_norm_stderr": 0.034867317274198714
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706467,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706467
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8535353535353535,
"acc_stderr": 0.025190921114603925,
"acc_norm": 0.8535353535353535,
"acc_norm_stderr": 0.025190921114603925
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9326424870466321,
"acc_stderr": 0.018088393839078912,
"acc_norm": 0.9326424870466321,
"acc_norm_stderr": 0.018088393839078912
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7128205128205128,
"acc_stderr": 0.022939925418530613,
"acc_norm": 0.7128205128205128,
"acc_norm_stderr": 0.022939925418530613
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.028133252578815642,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.028133252578815642
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.027553614467863814,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.027553614467863814
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.48344370860927155,
"acc_stderr": 0.0408024418562897,
"acc_norm": 0.48344370860927155,
"acc_norm_stderr": 0.0408024418562897
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.908256880733945,
"acc_stderr": 0.012376323409137118,
"acc_norm": 0.908256880733945,
"acc_norm_stderr": 0.012376323409137118
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.625,
"acc_stderr": 0.033016908987210894,
"acc_norm": 0.625,
"acc_norm_stderr": 0.033016908987210894
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8970588235294118,
"acc_stderr": 0.02132833757080437,
"acc_norm": 0.8970588235294118,
"acc_norm_stderr": 0.02132833757080437
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.01888975055095671,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.01888975055095671
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8116591928251121,
"acc_stderr": 0.026241132996407266,
"acc_norm": 0.8116591928251121,
"acc_norm_stderr": 0.026241132996407266
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8244274809160306,
"acc_stderr": 0.03336820338476072,
"acc_norm": 0.8244274809160306,
"acc_norm_stderr": 0.03336820338476072
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.859504132231405,
"acc_stderr": 0.031722334260021585,
"acc_norm": 0.859504132231405,
"acc_norm_stderr": 0.031722334260021585
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.036028141763926456,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.036028141763926456
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8159509202453987,
"acc_stderr": 0.03044677768797173,
"acc_norm": 0.8159509202453987,
"acc_norm_stderr": 0.03044677768797173
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5803571428571429,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.5803571428571429,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9017094017094017,
"acc_stderr": 0.019503444900757567,
"acc_norm": 0.9017094017094017,
"acc_norm_stderr": 0.019503444900757567
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8684546615581098,
"acc_stderr": 0.01208670521425043,
"acc_norm": 0.8684546615581098,
"acc_norm_stderr": 0.01208670521425043
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7543352601156069,
"acc_stderr": 0.023176298203992005,
"acc_norm": 0.7543352601156069,
"acc_norm_stderr": 0.023176298203992005
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6312849162011173,
"acc_stderr": 0.016135759015030126,
"acc_norm": 0.6312849162011173,
"acc_norm_stderr": 0.016135759015030126
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7684887459807074,
"acc_stderr": 0.023956532766639133,
"acc_norm": 0.7684887459807074,
"acc_norm_stderr": 0.023956532766639133
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.021613809395224805,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.021613809395224805
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.029494827600144363,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.029494827600144363
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5814863102998696,
"acc_stderr": 0.012599505608336477,
"acc_norm": 0.5814863102998696,
"acc_norm_stderr": 0.012599505608336477
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7022058823529411,
"acc_stderr": 0.027778298701545443,
"acc_norm": 0.7022058823529411,
"acc_norm_stderr": 0.027778298701545443
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.017322789207784326,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.017322789207784326
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7591836734693878,
"acc_stderr": 0.027372942201788163,
"acc_norm": 0.7591836734693878,
"acc_norm_stderr": 0.027372942201788163
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166323,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166323
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8538011695906432,
"acc_stderr": 0.027097290118070796,
"acc_norm": 0.8538011695906432,
"acc_norm_stderr": 0.027097290118070796
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4259485924112607,
"mc1_stderr": 0.017310471904076544,
"mc2": 0.6153983714812797,
"mc2_stderr": 0.014670323036655197
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.7530676126480103,
-0.8571256399154663,
0.2564946711063385,
0.24848783016204834,
-0.20081306993961334,
-0.06411176174879074,
0.02678024023771286,
-0.22153334319591522,
0.5778535604476929,
-0.07294595241546631,
-0.46372339129447937,
-0.712407648563385,
-0.46932050585746765,
0.260064065456... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vikp/textbook_quality_programming | vikp | 2023-10-08T18:36:50Z | 83 | 143 | null | [
"language:en",
"region:us"
] | 2023-10-08T18:36:50Z | 2023-09-22T16:04:56.000Z | 2023-09-22T16:04:56 | ---
language:
- en
dataset_info:
features:
- name: topic
dtype: string
- name: model
dtype: string
- name: concepts
sequence: string
- name: outline
sequence: string
- name: markdown
dtype: string
splits:
- name: train
num_bytes: 471931604
num_examples: 11650
download_size: 0
dataset_size: 471931604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "textbook_quality_programming"
Synthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.
~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in length. All books have section headers for optimal chunking.
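Since every book carries section headers, the `markdown` field can be chunked at header lines with plain Python; a minimal sketch (the field name comes from the schema above, and header detection here is a simple `#`-prefix assumption):

```python
def chunk_by_headers(markdown: str) -> list[str]:
    """Split a markdown document into one chunk per section header."""
    chunks, current = [], []
    for line in markdown.splitlines():
        # start a new chunk at every header line ("#", "##", ...)
        if line.startswith("#") and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

For token-budgeted training you would further split or merge these header-level chunks.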
Generated using the [textbook_quality](https://github.com/VikParuchuri/textbook_quality) repo. | [
-0.4125545024871826,
-0.33619391918182373,
0.06526551395654678,
-0.04760940745472908,
-0.5448160171508789,
0.13221846520900726,
0.23767350614070892,
-0.23844872415065765,
-0.036297112703323364,
0.34892648458480835,
-0.2603648602962494,
-0.6467573642730713,
-0.06322506070137024,
0.240817457... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
renumics/esc50 | renumics | 2023-11-09T09:17:07Z | 83 | 0 | null | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"license:cc-by-nc-2.0",
"region:us"
] | 2023-11-09T09:17:07Z | 2023-11-06T15:46:01.000Z | 2023-11-06T15:46:01 | ---
dataset_info:
features:
- name: src_file
dtype: string
- name: fold
dtype: int64
- name: label
dtype:
class_label:
names:
'0': dog
'1': rooster
'2': pig
'3': cow
'4': frog
'5': cat
'6': hen
'7': insects
'8': sheep
'9': crow
'10': rain
'11': sea_waves
'12': crackling_fire
'13': crickets
'14': chirping_birds
'15': water_drops
'16': wind
'17': pouring_water
'18': toilet_flush
'19': thunderstorm
'20': crying_baby
'21': sneezing
'22': clapping
'23': breathing
'24': coughing
'25': footsteps
'26': laughing
'27': brushing_teeth
'28': snoring
'29': drinking_sipping
'30': door_wood_knock
'31': mouse_click
'32': keyboard_typing
'33': door_wood_creaks
'34': can_opening
'35': washing_machine
'36': vacuum_cleaner
'37': clock_alarm
'38': clock_tick
'39': glass_breaking
'40': helicopter
'41': chainsaw
'42': siren
'43': car_horn
'44': engine
'45': train
'46': church_bells
'47': airplane
'48': fireworks
'49': hand_saw
- name: esc10
dtype: bool
- name: take
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 882179256
num_examples: 2000
download_size: 773038488
dataset_size: 882179256
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-nc-2.0
task_categories:
- audio-classification
size_categories:
- 1K<n<10K
---
# Dataset Card for "esc50"
This is a mirror for the ESC-50 dataset. Original sources:
https://github.com/karolpiczak/ESC-50
K. J. Piczak. ESC: Dataset for Environmental Sound Classification. Proceedings of the 23rd Annual ACM Conference on Multimedia, Brisbane, Australia, 2015.
[DOI: http://dx.doi.org/10.1145/2733373.2806390]
The dataset is available under the terms of the Creative Commons Attribution Non-Commercial license.
## Exploring the dataset
You can visualize the dataset using Renumics Spotlight:
```python
import datasets
from renumics import spotlight
ds = datasets.load_dataset('renumics/esc50', split='train')
spotlight.show(ds)
```
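The `label` column is an integer `ClassLabel`; to turn it back into a class name without loading the dataset, the names can be hard-coded from the metadata above (this list assumes the ordering shown in this card is unchanged):

```python
# Class names copied from the `label` ClassLabel definition in this
# card's metadata; the index order must match that list exactly.
ESC50_LABELS = [
    "dog", "rooster", "pig", "cow", "frog",
    "cat", "hen", "insects", "sheep", "crow",
    "rain", "sea_waves", "crackling_fire", "crickets", "chirping_birds",
    "water_drops", "wind", "pouring_water", "toilet_flush", "thunderstorm",
    "crying_baby", "sneezing", "clapping", "breathing", "coughing",
    "footsteps", "laughing", "brushing_teeth", "snoring", "drinking_sipping",
    "door_wood_knock", "mouse_click", "keyboard_typing", "door_wood_creaks", "can_opening",
    "washing_machine", "vacuum_cleaner", "clock_alarm", "clock_tick", "glass_breaking",
    "helicopter", "chainsaw", "siren", "car_horn", "engine",
    "train", "church_bells", "airplane", "fireworks", "hand_saw",
]

def label_name(label_id: int) -> str:
    """Map the integer `label` field to its human-readable class name."""
    return ESC50_LABELS[label_id]
```

With a loaded dataset, `ds.features['label'].int2str(example['label'])` gives the same mapping without the hard-coded list.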
## Explore enriched dataset
To fully understand the dataset, you can leverage model results such as embeddings or predictions.
Here is an example of how to use zero-shot classification with MS CLAP for this purpose:
```python
ds_results = datasets.load_dataset("renumics/esc50-clap2023-results",split='train')
ds = datasets.concatenate_datasets([ds, ds_results], axis=1)
spotlight.show(ds, dtype={'text_embedding': spotlight.Embedding, 'audio_embedding': spotlight.Embedding})
```

| [
-0.5745611190795898,
-0.2079075574874878,
0.4554058015346527,
0.08932124823331833,
-0.13146206736564636,
-0.014581176452338696,
-0.3601715862751007,
-0.15007801353931427,
0.4679047167301178,
0.28155261278152466,
-0.771916389465332,
-1.014828085899353,
-0.4867574870586395,
-0.01487808395177... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jlbaker361/small_subtraction_whole | jlbaker361 | 2023-11-17T05:53:43Z | 83 | 0 | null | [
"region:us"
] | 2023-11-17T05:53:43Z | 2023-11-17T04:47:35.000Z | 2023-11-17T04:47:35 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: float64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1320.0
num_examples: 40
- name: test
num_bytes: 165.0
num_examples: 5
download_size: 4097
dataset_size: 1485.0
---
# Dataset Card for "small_subtraction_whole"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6320396661758423,
-0.3720208406448364,
0.34352266788482666,
0.12096533179283142,
-0.5564253926277161,
-0.26837778091430664,
0.20521600544452667,
-0.23785385489463806,
0.9983044862747192,
0.2956239581108093,
-0.9360116720199585,
-0.5835648775100708,
-0.7452325224876404,
-0.25659519433975... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GotItAI/rocketbook_gpt4_responses_combined_template | GotItAI | 2023-11-22T04:10:19Z | 83 | 0 | null | [
"region:us"
] | 2023-11-22T04:10:19Z | 2023-11-22T04:10:18.000Z | 2023-11-22T04:10:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: prompt_id
dtype: int64
splits:
- name: train
num_bytes: 8826631
num_examples: 1459
- name: test
num_bytes: 3243123
num_examples: 530
download_size: 4302434
dataset_size: 12069754
---
# Dataset Card for "rocketbook_gpt4_responses_combined_template"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.531127393245697,
-0.016618309542536736,
0.08750218898057938,
0.31445521116256714,
-0.18132995069026947,
0.2625720798969269,
0.2608215808868408,
0.08756395429372787,
0.6182503700256348,
0.25350436568260193,
-0.8152672648429871,
-0.6295396685600281,
-0.5129497051239014,
-0.147408992052078... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
midas/ldkp3k | midas | 2022-09-27T18:29:25Z | 82 | 4 | null | [
"region:us"
] | 2022-09-27T18:29:25Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | A dataset for benchmarking keyphrase extraction and generation techniques from long document English scientific papers. For more details about the dataset please refer to the original paper - []().
Data source - []()
## Dataset Summary
## Dataset Structure
### Data Fields
- **id**: unique identifier of the document.
- **sections**: list of all the sections present in the document.
- **sec_text**: list of the whitespace-separated words of each section.
- **sec_bio_tags**: list of the BIO tags for the whitespace-separated words of each section.
- **extractive_keyphrases**: List of all the present keyphrases.
- **abstractive_keyphrases**: List of all the absent keyphrases.
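The parallel `sec_text` / `sec_bio_tags` lists can be decoded back into present keyphrases with a few lines of plain Python. This is a hedged sketch: the exact tag strings are an assumption here (tags beginning with `B`, `I`, or `O`), so verify against a sample before relying on it:

```python
def keyphrases_from_bio(tokens, tags):
    """Collect keyphrases from parallel token / BIO-tag lists.

    Assumes tags start with "B" / "I" / "O" (e.g. "B", "I", "O" or
    "B-KP", "I-KP"); adjust if the actual tag strings differ.
    """
    phrases, current = [], []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B"):
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag.startswith("I") and current:
            current.append(token)
        else:  # "O", or an "I" with no open phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases
```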
### Data Splits
|Split| #datapoints |
|--|--|
| Train-Small | 20,000 |
| Train-Medium | 50,000 |
| Train-Large | 90,019 |
| Test | 3413 |
| Validation | 3339 |
## Usage
### Small Dataset
```python
from datasets import load_dataset
# get small dataset
dataset = load_dataset("midas/ldkp3k", "small")
def order_sections(sample):
"""
corrects the order in which different sections appear in the document.
resulting order is: title, abstract, other sections in the body
"""
sections = []
sec_text = []
sec_bio_tags = []
if "title" in sample["sections"]:
title_idx = sample["sections"].index("title")
sections.append(sample["sections"].pop(title_idx))
sec_text.append(sample["sec_text"].pop(title_idx))
sec_bio_tags.append(sample["sec_bio_tags"].pop(title_idx))
if "abstract" in sample["sections"]:
abstract_idx = sample["sections"].index("abstract")
sections.append(sample["sections"].pop(abstract_idx))
sec_text.append(sample["sec_text"].pop(abstract_idx))
sec_bio_tags.append(sample["sec_bio_tags"].pop(abstract_idx))
sections += sample["sections"]
sec_text += sample["sec_text"]
sec_bio_tags += sample["sec_bio_tags"]
return sections, sec_text, sec_bio_tags
# sample from the train split
print("Sample from train data split")
train_sample = dataset["train"][0]
sections, sec_text, sec_bio_tags = order_sections(train_sample)
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Section names: ", sections)
print("Tokenized Document: ", sec_text)
print("Document BIO Tags: ", sec_bio_tags)
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
sections, sec_text, sec_bio_tags = order_sections(validation_sample)
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Section names: ", sections)
print("Tokenized Document: ", sec_text)
print("Document BIO Tags: ", sec_bio_tags)
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")
# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
sections, sec_text, sec_bio_tags = order_sections(test_sample)
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Section names: ", sections)
print("Tokenized Document: ", sec_text)
print("Document BIO Tags: ", sec_bio_tags)
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**
```bash
```
### Medium Dataset
```python
from datasets import load_dataset
# get medium dataset
dataset = load_dataset("midas/ldkp3k", "medium")
```
### Large Dataset
```python
from datasets import load_dataset
# get large dataset
dataset = load_dataset("midas/ldkp3k", "large")
```
## Citation Information
Please cite the works below if you use this dataset in your work.
```
@article{dl4srmahata2022ldkp,
title={LDKP - A Dataset for Identifying Keyphrases from Long Scientific Documents},
author={Mahata, Debanjan and Agarwal, Naveen and Gautam, Dibya and Kumar, Amardeep and Parekh, Swapnil and Singla, Yaman Kumar and Acharya, Anish and Shah, Rajiv Ratn},
journal={DL4SR-22: Workshop on Deep Learning for Search and Recommendation, co-located with the 31st ACM International Conference on Information and Knowledge Management (CIKM)},
address={Atlanta, USA},
month={October},
year={2022}
}
```
```
@article{mahata2022ldkp,
title={LDKP: A Dataset for Identifying Keyphrases from Long Scientific Documents},
author={Mahata, Debanjan and Agarwal, Naveen and Gautam, Dibya and Kumar, Amardeep and Parekh, Swapnil and Singla, Yaman Kumar and Acharya, Anish and Shah, Rajiv Ratn},
journal={arXiv preprint arXiv:2203.15349},
year={2022}
}
```
```
@article{lo2019s2orc,
title={S2ORC: The semantic scholar open research corpus},
author={Lo, Kyle and Wang, Lucy Lu and Neumann, Mark and Kinney, Rodney and Weld, Dan S},
journal={arXiv preprint arXiv:1911.02782},
year={2019}
}
```
```
@inproceedings{ccano2019keyphrase,
title={Keyphrase generation: A multi-aspect survey},
author={{\c{C}}ano, Erion and Bojar, Ond{\v{r}}ej},
booktitle={2019 25th Conference of Open Innovations Association (FRUCT)},
pages={85--94},
year={2019},
organization={IEEE}
}
```
```
@article{meng2017deep,
title={Deep keyphrase generation},
author={Meng, Rui and Zhao, Sanqiang and Han, Shuguang and He, Daqing and Brusilovsky, Peter and Chi, Yu},
journal={arXiv preprint arXiv:1704.06879},
year={2017}
}
```
## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), [@UmaGunturi](https://github.com/UmaGunturi) and [@ad6398](https://github.com/ad6398) for adding this dataset
| [
-0.09150252491235733,
-0.39015111327171326,
0.46032246947288513,
0.1386006474494934,
-0.37370651960372925,
0.14654752612113953,
-0.23087842762470245,
-0.12274618446826935,
0.1963415890932083,
0.22869887948036194,
-0.46371176838874817,
-0.8060685992240906,
-0.4822401702404022,
0.61718469858... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
enimai/MuST-C-fr | enimai | 2022-11-21T18:39:41Z | 82 | 0 | null | [
"task_categories:translation",
"language:en",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2022-11-21T18:39:41Z | 2022-03-20T14:27:39.000Z | 2022-03-20T14:27:39 | ---
license: apache-2.0
language:
- en
- fr
task_categories:
- translation
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
huggan/night2day | huggan | 2022-04-12T14:18:51Z | 82 | 0 | null | [
"region:us"
] | 2022-04-12T14:18:51Z | 2022-03-23T16:43:09.000Z | 2022-03-23T16:43:09 | # Citation
```
@article{pix2pix2017,
title={Image-to-Image Translation with Conditional Adversarial Networks},
author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
journal={CVPR},
year={2017}
}
``` | [
0.03880864754319191,
-0.28023239970207214,
0.36057421565055847,
0.03225378319621086,
-0.3917146623134613,
-0.6022403836250305,
-0.17084580659866333,
-0.5091608762741089,
-0.09093120694160461,
0.23600904643535614,
-0.13457730412483215,
-0.44220826029777527,
-0.9884014129638672,
0.1408354490... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
khalidalt/tydiqa-primary | khalidalt | 2022-07-28T21:56:04Z | 82 | 0 | tydi-qa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:unknown",
"source_datasets:extended|wikipedia",
"language:en",
"language:ar",
"language:bn",
"language:fi",
"l... | 2022-07-28T21:56:04Z | 2022-06-16T17:20:46.000Z | 2022-06-16T17:20:46 | ---
pretty_name: TyDi QA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: tydi-qa
---
# Dataset Card for "tydiqa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3726.74 MB
- **Size of the generated dataset:** 5812.92 MB
- **Total amount of disk used:** 9539.67 MB
### Dataset Summary
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize across a large number of the languages
in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic
information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but
don't know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without
the use of translation (unlike MLQA and XQuAD).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### primary_task
- **Size of downloaded dataset files:** 1863.37 MB
- **Size of the generated dataset:** 5757.59 MB
- **Total amount of disk used:** 7620.96 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"annotations": {
"minimal_answers_end_byte": [-1, -1, -1],
"minimal_answers_start_byte": [-1, -1, -1],
"passage_answer_candidate_index": [-1, -1, -1],
"yes_no_answer": ["NONE", "NONE", "NONE"]
},
"document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
"document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
"document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
"language": "thai",
"passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
"question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```
### Data Fields
The data fields are the same among all splits.
#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
- `plaintext_start_byte`: a `int32` feature.
- `plaintext_end_byte`: a `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
- `passage_answer_candidate_index`: a `int32` feature.
- `minimal_answers_start_byte`: a `int32` feature.
- `minimal_answers_end_byte`: a `int32` feature.
- `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.
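The passage candidates are addressed by byte offsets into the UTF-8 encoding of `document_plaintext`. A minimal sketch of recovering the gold passage for one annotation (field names are taken from this card; the byte-offset semantics follow the TyDi QA format and should be verified against the data):

```python
# Sketch: recover the gold passage text for one annotation.
# Field names come from the card above; start/end offsets are
# assumed to be byte positions into the UTF-8 encoded plaintext.
def passage_text(example, annotation_idx=0):
    cand_idx = example["annotations"]["passage_answer_candidate_index"][annotation_idx]
    if cand_idx == -1:  # the annotator found no passage answer
        return None
    start = example["passage_answer_candidates"]["plaintext_start_byte"][cand_idx]
    end = example["passage_answer_candidates"]["plaintext_end_byte"][cand_idx]
    raw = example["document_plaintext"].encode("utf-8")
    return raw[start:end].decode("utf-8", errors="replace")
```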
### Data Splits
| name | train | validation |
| -------------- | -----: | ---------: |
| primary_task | 166916 | 18670 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of the Association for Computational Linguistics}
}
```
```
@inproceedings{ruder-etal-2021-xtreme,
title = "{XTREME}-{R}: Towards More Challenging and Nuanced Multilingual Evaluation",
author = "Ruder, Sebastian and
Constant, Noah and
Botha, Jan and
Siddhant, Aditya and
Firat, Orhan and
Fu, Jinlan and
Liu, Pengfei and
Hu, Junjie and
Garrette, Dan and
Neubig, Graham and
Johnson, Melvin",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.802",
doi = "10.18653/v1/2021.emnlp-main.802",
pages = "10215--10245",
}
```
| [
-0.6356588006019592,
-0.6846776604652405,
0.26167652010917664,
0.08935262262821198,
-0.1820766180753708,
0.11268153786659241,
-0.3632335364818573,
-0.3440060019493103,
0.6013126373291016,
0.40782761573791504,
-0.7166253924369812,
-0.9011983871459961,
-0.45372045040130615,
0.223753511905670... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/wnut2017 | tner | 2022-08-06T23:30:30Z | 82 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"region:us"
] | 2022-08-06T23:30:30Z | 2022-07-16T11:08:24.000Z | 2022-07-16T11:08:24 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1k<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: WNUT 2017
---
# Dataset Card for "tner/wnut2017"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/W17-4418/](https://aclanthology.org/W17-4418/)
- **Dataset:** WNUT 2017
- **Domain:** Twitter, Reddit, YouTube, and StackExchange
- **Number of Entity:** 6
### Dataset Summary
WNUT 2017 NER dataset, formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `creative-work`, `corporation`, `group`, `location`, `person`, `product`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
'tags': [12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 3, 9, 9, 12, 3, 12, 12, 12, 12, 12, 12, 12, 12]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/wnut2017/raw/main/dataset/label.json).
```python
{
"B-corporation": 0,
"B-creative-work": 1,
"B-group": 2,
"B-location": 3,
"B-person": 4,
"B-product": 5,
"I-corporation": 6,
"I-creative-work": 7,
"I-group": 8,
"I-location": 9,
"I-person": 10,
"I-product": 11,
"O": 12
}
```
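To read the integer `tags` in a sample, invert this mapping back into BIO label strings. A small self-contained sketch using the dictionary above (the token/tag values come from the train instance shown earlier on this card):

```python
# Sketch: invert the label2id mapping above to turn integer `tags`
# back into BIO label strings (tokens/tags taken from the train
# example on this card).
label2id = {
    "B-corporation": 0, "B-creative-work": 1, "B-group": 2,
    "B-location": 3, "B-person": 4, "B-product": 5,
    "I-corporation": 6, "I-creative-work": 7, "I-group": 8,
    "I-location": 9, "I-person": 10, "I-product": 11, "O": 12,
}
id2label = {i: label for label, i in label2id.items()}

tokens = ["Empire", "State", "Building", "=", "ESB"]
tags = [3, 9, 9, 12, 3]
labels = [id2label[t] for t in tags]
print(list(zip(tokens, labels)))
# [('Empire', 'B-location'), ('State', 'I-location'), ('Building', 'I-location'), ('=', 'O'), ('ESB', 'B-location')]
```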
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|wnut2017 | 2395| 1009|1287|
### Citation Information
```
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
``` | [
-0.6463169455528259,
-0.5259838700294495,
0.1860770583152771,
0.02052835375070572,
-0.24540555477142334,
0.22047248482704163,
-0.489566832780838,
-0.741310179233551,
0.550929069519043,
0.1582365185022354,
-0.5052570104598999,
-0.8456747531890869,
-0.6637323498725891,
0.34377655386924744,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
giulio98/xlcost-formatted | giulio98 | 2022-10-18T17:34:36Z | 82 | 2 | null | [
"region:us"
] | 2022-10-18T17:34:36Z | 2022-10-01T16:48:04.000Z | 2022-10-01T16:48:04 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kasnerz/logic2text | kasnerz | 2023-03-14T15:08:47Z | 82 | 0 | null | [
"region:us"
] | 2023-03-14T15:08:47Z | 2022-11-28T12:09:45.000Z | 2022-11-28T12:09:45 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metaeval/strategy-qa | metaeval | 2023-02-09T14:56:34Z | 82 | 2 | null | [
"region:us"
] | 2023-02-09T14:56:34Z | 2023-02-09T14:55:42.000Z | 2023-02-09T14:55:42 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
potsawee/podcast_summary_assessment | potsawee | 2023-05-29T23:17:15Z | 82 | 3 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"arxiv:2208.13265",
"region:us"
] | 2023-05-29T23:17:15Z | 2023-03-20T14:23:36.000Z | 2023-03-20T14:23:36 | ---
license: cc-by-4.0
language:
- en
size_categories:
- 1K<n<10K
dataset_info:
features:
- name: transcript
dtype: string
- name: summary
dtype: string
- name: score
dtype: string
- name: attributes
sequence: int64
- name: episode_id
dtype: string
- name: system_id
dtype: string
splits:
- name: evaluation
num_bytes: 100261033
num_examples: 3580
download_size: 11951831
dataset_size: 100261033
---
# Podcast Summary Assessment
- The description is available in our GitHub repo: https://github.com/potsawee/podcast_summary_assessment
- Paper: [Podcast Summary Assessment: A Resource for Evaluating Summary Assessment Methods](https://arxiv.org/abs/2208.13265)
### Citation Information
```
@article{manakul2022podcast,
title={Podcast Summary Assessment: A Resource for Evaluating Summary Assessment Methods},
author={Manakul, Potsawee and Gales, Mark JF},
journal={arXiv preprint arXiv:2208.13265},
year={2022}
}
``` | [
-0.4581451416015625,
-0.27525392174720764,
0.28497180342674255,
0.5348638892173767,
-0.22172227501869202,
0.0863361731171608,
-0.21606455743312836,
-0.1759941130876541,
0.8106722235679626,
0.009891767986118793,
-0.31738168001174927,
-0.3134802579879761,
-0.665935218334198,
-0.2056989669799... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MattiaL/tapir-cleaned-116k | MattiaL | 2023-05-09T07:59:44Z | 82 | 1 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"instruction-finetuning",
"region:us"
] | 2023-05-09T07:59:44Z | 2023-05-08T14:11:40.000Z | 2023-05-08T14:11:40 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Tapir-Cleaned
task_categories:
- text-generation
size_categories:
- 100K<n<1M
---
# Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the IFTTT platform.
After a thorough cleaning process that involved the removal of redundant and inconsistent recipes, the refined dataset was condensed to include 116,862 high-quality recipes.
This curated set of instruction data is particularly useful for conducting instruction-tuning exercises for language models,
allowing them to more accurately follow instructions and achieve superior performance.
The latest version of Tapir includes a correlation score that helps identify the most appropriate description-rule pairs for instruction tuning.
Description-rule pairs with a score greater than 0.75 are deemed good enough and are prioritized for further analysis and tuning.
### Supported Tasks and Leaderboards
The Tapir dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Tapir are mainly in English (BCP-47 en).
# Dataset Structure
### Data Instances
```json
{
"instruction":"From the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.",
"input":"If lostphone is texted to my phone the volume will turn up to 100 so I can find it.",
"output":"IF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
"score":"0.804322",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nFrom the description of a rule: identify the 'trigger', identify the 'action', write a IF 'trigger' THEN 'action' rule.\n\n### Input:\nIf lostphone is texted to my phone the volume will turn up to 100 so I can find it.\n\n### Response:\nIF Android SMS New SMS received matches search THEN Android Device Set ringtone volume",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform.
* `input`: context or input for the task. Each of the 116K inputs is unique.
* `output`: the answer taken from the original Tapir Dataset formatted as an IFTTT recipe.
* `score`: the correlation score obtained via BertForNextSentencePrediction
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors of Alpaca for fine-tuning their models.
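The `text` field can be reconstructed from the other fields with the Alpaca prompt template linked above. A sketch (the template wording is copied from the data instance shown earlier on this card):

```python
# Sketch: rebuild the `text` field from `instruction`, `input`, and
# `output`, using the Alpaca-style template (wording taken from the
# data instance above).
TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

def build_text(example):
    return TEMPLATE.format(
        instruction=example["instruction"],
        input=example["input"],
        output=example["output"],
    )
```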
### Data Splits
| | train |
|---------------|------:|
| tapir | 116862 |
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{tapir,
  author = {Mattia Limone and Gaetano Cimino and Annunziata Elefante},
title = {TAPIR: Trigger Action Platform for Information Retrieval},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/MattiaLimone/ifttt_recommendation_system}},
}
``` | [
-0.3936106860637665,
-0.6156448125839233,
-0.01770600490272045,
0.28775373101234436,
-0.35196760296821594,
-0.17075125873088837,
-0.21959051489830017,
-0.15221060812473297,
0.28398367762565613,
0.6089643836021423,
-0.5448781251907349,
-0.7126005291938782,
-0.5383265018463135,
-0.1240775287... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DISCOX/DISCO-200K-high-quality | DISCOX | 2023-06-20T14:25:45Z | 82 | 1 | null | [
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"region:us"
] | 2023-06-20T14:25:45Z | 2023-06-10T19:17:45.000Z | 2023-06-10T19:17:45 | ---
license: cc-by-4.0
dataset_info:
features:
- name: video_url_youtube
dtype: string
- name: video_title_youtube
dtype: string
- name: track_name_spotify
dtype: string
- name: video_duration_youtube_sec
dtype: float64
- name: preview_url_spotify
dtype: string
- name: video_view_count_youtube
dtype: float64
- name: video_thumbnail_url_youtube
dtype: string
- name: search_query_youtube
dtype: string
- name: video_description_youtube
dtype: string
- name: track_id_spotify
dtype: string
- name: album_id_spotify
dtype: string
- name: artist_id_spotify
sequence: string
- name: track_duration_spotify_ms
dtype: int64
- name: primary_artist_name_spotify
dtype: string
- name: track_release_date_spotify
dtype: string
- name: explicit_content_spotify
dtype: bool
- name: similarity_duration
dtype: float64
- name: similarity_query_video_title
dtype: float64
- name: similarity_query_description
dtype: float64
- name: similarity_audio
dtype: float64
- name: audio_embedding_spotify
sequence: float32
- name: audio_embedding_youtube
sequence: float32
splits:
- name: train
num_bytes: 958015009
num_examples: 200000
download_size: 1154630326
dataset_size: 958015009
size_categories:
- 100K<n<1M
---
### Getting Started
You can download the dataset using HuggingFace:
```python
from datasets import load_dataset
ds = load_dataset("DISCOX/DISCO-200K-high-quality")
```
The dataset contains 200,000 high-quality samples from the DISCO-10M dataset found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
High-quality refers to the similarity filtering: all samples in this dataset have a similarity between search query and video title greater than 0.8, and a similarity between the Spotify preview and the YouTube video greater than 0.7.
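Those two thresholds could be re-applied to the full DISCO-10M dataset to reproduce a subset like this one. A minimal sketch of the predicate (field names come from the feature list on this card; on a loaded split it could be passed to `datasets`' `filter()`):

```python
# Sketch: the similarity filter described above, as a predicate that
# could be passed to Dataset.filter() on the full DISCO-10M data.
def is_high_quality(example):
    return (
        example["similarity_query_video_title"] > 0.8
        and example["similarity_audio"] > 0.7
    )
```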
## Dataset Structure
The dataset contains the following features:
```json
{
'video_url_youtube',
'video_title_youtube',
'track_name_spotify',
'video_duration_youtube_sec',
'preview_url_spotify',
'video_view_count_youtube',
'video_thumbnail_url_youtube',
'search_query_youtube',
'video_description_youtube',
'track_id_spotify',
'album_id_spotify',
'artist_id_spotify',
'track_duration_spotify_ms',
'primary_artist_name_spotify',
'track_release_date_spotify',
'explicit_content_spotify',
'similarity_duration',
'similarity_query_video_title',
'similarity_query_description',
'similarity_audio',
'audio_embedding_spotify',
'audio_embedding_youtube',
}
```
More details about the dataset can be found [here](https://huggingface.co/datasets/DISCOX/DISCO-10M).
<!--
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
--> | [
-0.7952154278755188,
-0.5277819633483887,
0.01654820330440998,
0.4162525236606598,
-0.08077089488506317,
0.05655669420957565,
-0.15118859708309174,
0.013093501329421997,
0.6829128265380859,
0.7354075312614441,
-1.0779274702072144,
-0.8643563985824585,
-0.4429342746734619,
0.153871700167655... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mrSoul7766/ECTSum | mrSoul7766 | 2023-08-07T16:59:19Z | 82 | 0 | null | [
"region:us"
] | 2023-08-07T16:59:19Z | 2023-08-07T16:57:45.000Z | 2023-08-07T16:57:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
brando/debug0_af | brando | 2023-08-10T23:10:04Z | 82 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-08-10T23:10:04Z | 2023-08-09T01:46:02.000Z | 2023-08-09T01:46:02 | ---
license: apache-2.0
---
If you find this useful, please cite it:
```
@software{brando2021ultimateutils,
author={Brando Miranda},
title={Ultimate Utils - the Ultimate Utils library for Machine Learning and Artificial Intelligence},
url={https://github.com/brando90/ultimate-utils},
year={2021}
}
```
It's not supposed to be used by people yet. It's under the Apache license too.
-0.05450998246669769,
0.1264432668685913,
0.39456990361213684,
0.3208855986595154,
-0.23295868933200836,
0.03671504184603691,
0.18892723321914673,
-0.419245183467865,
0.23767514526844025,
0.5069212317466736,
-0.62726891040802,
-0.5836659669876099,
-0.42159202694892883,
-0.05464412644505501... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vhtran/en-id | vhtran | 2023-08-20T07:53:35Z | 82 | 4 | null | [
"task_categories:translation",
"language:en",
"language:id",
"license:cc-by-4.0",
"region:us"
] | 2023-08-20T07:53:35Z | 2023-08-20T07:31:37.000Z | 2023-08-20T07:31:37 | ---
license: cc-by-4.0
task_categories:
- translation
language:
- en
- id
pretty_name: enidlrmt
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
fake-news-UFG/FactChecksbr | fake-news-UFG | 2023-08-24T17:40:04Z | 82 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pt",
"license:mit",
"doi:10.57967/hf/1016",
"region:us"
] | 2023-08-24T17:40:04Z | 2023-08-23T17:15:02.000Z | 2023-08-23T17:15:02 | ---
license: mit
task_categories:
- text-classification
language:
- pt
pretty_name: FactChecks.br
size_categories:
- 10K<n<100K
---
# FactChecks.br
## Dataset Description
- **Homepage:**
- **Repository:** [github.com/fake-news-UFG/FactChecks.br](github.com/fake-news-UFG/FactChecks.br)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Collection of Portuguese Fact-Checking Benchmarks.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use the FactChecks.br dataset, please cite:
```bibtex
@misc{FactChecksbr,
author = {R. S. Gomes, Juliana},
title = {FactChecks.br},
url = {https://github.com/fake-news-UFG/FactChecks.br},
doi = { 10.57967/hf/1016 },
}
```
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset. | [
-0.32192716002464294,
-0.4571555256843567,
0.1259218156337738,
0.39155346155166626,
-0.5469757914543152,
0.06027768552303314,
-0.29216957092285156,
-0.39261510968208313,
0.5331964492797852,
0.6667866706848145,
-0.536369264125824,
-0.9535477161407471,
-0.768456220626831,
0.10095791518688202... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
norashameri97/tmp-translation | norashameri97 | 2023-08-24T10:23:50Z | 82 | 0 | null | [
"region:us"
] | 2023-08-24T10:23:50Z | 2023-08-24T10:23:49.000Z | 2023-08-24T10:23:49 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: arabic
dtype: string
splits:
- name: train
num_bytes: 27
num_examples: 1
- name: test
num_bytes: 23
num_examples: 1
download_size: 1764
dataset_size: 50
---
# Dataset Card for "tmp-translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.2939186692237854,
-0.09952787309885025,
0.3756123483181,
0.36216023564338684,
-0.5988318920135498,
0.24291640520095825,
-0.03828579559922218,
-0.03445076569914818,
0.7031342387199402,
0.4569895267486572,
-0.7478289008140564,
-0.9615538716316223,
-0.9279459714889526,
-0.23370882868766785... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NLPCoreTeam/humaneval_ru | NLPCoreTeam | 2023-10-23T12:07:50Z | 82 | 6 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:ru",
"language:en",
"license:mit",
"code",
"arxiv:2107.03374",
"region:us"
] | 2023-10-23T12:07:50Z | 2023-08-30T13:06:37.000Z | 2023-08-30T13:06:37 | ---
license: mit
task_categories:
- text-generation
language:
- ru
- en
tags:
- code
size_categories:
- n<1K
---
# HumanEval_ru Dataset
## Dataset Summary
This is a version of the code generation [HumanEval dataset](https://huggingface.co/datasets/openai_humaneval) translated into Russian.
## Supported tasks
The task is to generate the body of a function based on the function signature and docstring. The programming problems are written in Python and contain Russian natural text in comments and docstrings.
## Task example
```python
from typing import List
def string_xor(a: str, b: str) -> str:
"""
Входными данными являются две строки a и b, состоящие только из 1 и 0.
Выполните двоичное XOR для этих входных данных и верните результат также в виде строки.
>>> string_xor('010', '110')
'100'
"""
# Your code here
```
## Dataset structure
Please refer to the structure of the [original HumanEval dataset](https://huggingface.co/datasets/openai_humaneval)
## Translation
Textual descriptions of tasks were translated automatically via Yandex.Translate API and then manually edited. Feel free to report errors in translations.
# Usage
## Load dataset
```python
from datasets import load_dataset
load_dataset('NLPCoreTeam/humaneval_ru')
DatasetDict({
train: Dataset({
features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point', 'signature', 'docstring', 'context', 'instruction', 'instruction_noexamples'],
num_rows: 164
})
})
```
## How to evaluate your models
To evaluate the code generation capabilities of your models on HumanEval_ru, please follow these steps (the example is for [Codellama-7b-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)):
1. Clone https://github.com/NLP-Core-Team/bigcode-evaluation-harness
2. Run evaluation (WARNING: generated code is executed, it may be unsafe) with the following command
```console
# mkdir -p ./outs/humaneval_ru
# mkdir -p ./results/humaneval_ru
accelerate launch main.py \
--model codellama/CodeLlama-7b-Python-hf \
--max_length_generation 512 \
--tasks humaneval_ru \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size 1 \
--allow_code_execution \
--save_generations_path ./outs/humaneval_ru/codellama-7b-py.json \
--metric_output_path ./results/humaneval_ru/codellama-7b-py.metrics
```
3. The resulting metrics for Codellama-7b-Python should be
```python
"humaneval_ru": {
"pass@1": 0.35,
"pass@10": 0.5122803695209872
},
```
# Benchmark
Evaluations of [Starcoder](https://huggingface.co/bigcode/starcoder) and [Codellama](https://huggingface.co/codellama/CodeLlama-7b-hf) models on HumanEval_Ru and HumanEval are presented in the table below. For further information on Pass@1 and Pass@10, please refer to the [original paper](https://arxiv.org/abs/2107.03374).
| model | RU Pass@1 | RU Pass@10 | EN Pass@1 | EN Pass@10 |
|:------------------------|--------------------------:|---------------------------:|--------------------------:|---------------------------:|
| starcoderbase-1b | 0.1420 | 0.1801 | 0.1509 | 0.2045 |
| starcoderbase-3b | 0.1924 | 0.2606 | 0.2137 | 0.3289 |
| starcoderbase-7b | 0.2515 | 0.3359 | 0.2868 | 0.3852 |
| starcoderbase-15b | 0.2676 | 0.3872 | 0.3036 | 0.4611 |
| starcoder-15b-Python | 0.3103 | 0.4132 | 0.3353 | 0.4931 |
| CodeLlama-7b-hf | 0.2673 | 0.3688 | 0.2975 | 0.4351 |
| CodeLlama-7b-Python-hf | 0.3500 | 0.5122 | 0.3960 | 0.5761 |
| CodeLlama-13b-hf | 0.3380 | 0.4884 | 0.3557 | 0.5489 |
| CodeLlama-13b-Python-hf | 0.4380 | 0.5796 | 0.4301 | 0.6226 |
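For reference, the Pass@k numbers in the table are the standard unbiased estimator from the Codex paper; a minimal sketch (the evaluation harness computes this for you) looks like:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate, 1 - C(n - c, k) / C(n, k),
    where n = generated samples per task and c = correct samples."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(20, 7, 1))  # ≈ 0.35, i.e. c / n for k = 1
```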
<details>
<summary> Script to reproduce the results in the table </summary>
```console
#!/bin/bash
# use with https://github.com/NLP-Core-Team/bigcode-evaluation-harness
# RU
mkdir -p ./outs/humaneval_ru
mkdir -p ./results/humaneval_ru
MODELS_PATH="bigcode"
echo $MODELS_PATH
declare -A bs=( ["starcoderbase-1b"]=16 ["starcoderbase-3b"]=8 ["starcoderbase-7b"]=4 ["starcoderbase"]=1 ["starcoder"]=1)
for model_name in starcoderbase-1b starcoderbase-3b starcoderbase-7b starcoderbase starcoder
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval_ru \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval_ru/$model_name.json \
--metric_output_path ./results/humaneval_ru/$model_name.metrics
done
MODELS_PATH="codellama"
echo $MODELS_PATH
declare -A bs=( ["CodeLlama-7b-Python-hf"]=8 ["CodeLlama-7b-hf"]=16 ["CodeLlama-13b-Python-hf"]=4 ["CodeLlama-13b-hf"]=4 )
for model_name in CodeLlama-7b-hf CodeLlama-7b-Python-hf CodeLlama-13b-hf CodeLlama-13b-Python-hf
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval_ru \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval_ru/$model_name.json \
--metric_output_path ./results/humaneval_ru/$model_name.metrics
done
# EN
mkdir -p ./outs/humaneval
mkdir -p ./results/humaneval
MODELS_PATH="bigcode"
echo $MODELS_PATH
declare -A bs=( ["starcoderbase-1b"]=16 ["starcoderbase-3b"]=8 ["starcoderbase-7b"]=4 ["starcoderbase"]=1 ["starcoder"]=1)
for model_name in starcoderbase-1b starcoderbase-3b starcoderbase-7b starcoderbase starcoder
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval/$model_name.json \
--metric_output_path ./results/humaneval/$model_name.metrics
done
MODELS_PATH="codellama"
echo $MODELS_PATH
declare -A bs=( ["CodeLlama-7b-Python-hf"]=8 ["CodeLlama-7b-hf"]=16 ["CodeLlama-13b-Python-hf"]=4 ["CodeLlama-13b-hf"]=4 )
for model_name in CodeLlama-7b-hf CodeLlama-7b-Python-hf CodeLlama-13b-hf CodeLlama-13b-Python-hf
do
echo $MODELS_PATH/$model_name
accelerate launch --mixed_precision="fp16" main.py \
--model $MODELS_PATH/$model_name \
--max_length_generation 512 \
--tasks humaneval \
--use_auth_token \
--temperature 0.2 \
--n_samples 20 \
--precision fp16 \
--batch_size ${bs[$model_name]} \
--allow_code_execution \
--save_generations_path ./outs/humaneval/$model_name.json \
--metric_output_path ./results/humaneval/$model_name.metrics
done
```
</details> | [
-0.3457956612110138,
-0.5957945585250854,
0.43573665618896484,
0.16430307924747467,
-0.03988593816757202,
-0.25179725885391235,
-0.2895323932170868,
-0.2379852682352066,
0.13847699761390686,
0.523129940032959,
-0.560620903968811,
-0.739166259765625,
-0.5536145567893982,
0.1739843636751175,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deven367/babylm-100M-children-stories | deven367 | 2023-09-16T05:17:25Z | 82 | 0 | null | [
"region:us"
] | 2023-09-16T05:17:25Z | 2023-09-16T05:17:12.000Z | 2023-09-16T05:17:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17676869
num_examples: 76758
- name: valid
num_bytes: 1425137
num_examples: 5996
- name: test
num_bytes: 1804421
num_examples: 7959
download_size: 12749002
dataset_size: 20906427
---
# Dataset Card for "babylm-100M-children-stories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5506556034088135,
-0.20028901100158691,
-0.02566072717308998,
0.39583101868629456,
-0.27478286623954773,
0.11230945587158203,
0.3167092502117157,
-0.2538001239299774,
0.616499662399292,
0.4638524055480957,
-1.082490086555481,
-0.6256014704704285,
-0.5085520148277283,
-0.4214418530464172... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kineticseas/sql-test | kineticseas | 2023-09-26T20:52:14Z | 82 | 0 | null | [
"region:us"
] | 2023-09-26T20:52:14Z | 2023-09-26T19:01:58.000Z | 2023-09-26T19:01:58 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nicolas-BZRD/JORF_opendata | Nicolas-BZRD | 2023-09-29T14:37:00Z | 82 | 0 | null | [
"size_categories:1M<n<10M",
"language:fr",
"license:odc-by",
"legal",
"region:us"
] | 2023-09-29T14:37:00Z | 2023-09-29T13:39:14.000Z | 2023-09-29T13:39:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4361779320
num_examples: 3616038
download_size: 1747268676
dataset_size: 4361779320
license: odc-by
language:
- fr
tags:
- legal
size_categories:
- 1M<n<10M
---
# JORF ("Laws and decrees" edition of the Official Journal)
The documents published in the ["Laws and decrees" edition of the Official Journal](https://echanges.dila.gouv.fr/OPENDATA/JORF/) since 1990 comprise:
- laws, ordinances, decrees, orders and circulars.
- decisions issued by institutions or courts that must be published in the Official Journal (Constitutional Council, Conseil supérieur de l'audiovisuel, Autorité de régulation des télécommunications, etc.)
- notices and communications since 1 January 2002 (notices to importers and exporters, competition notices and job vacancy notices).
In the interests of privacy and the protection of personal data, certain sensitive nominative measures are not reproduced in this section:
- decrees concerning naturalisation, reinstatement, mention of a minor child benefiting from the collective effect attached to the acquisition of French nationality by the parents and the francization of surnames and forenames
- change of name decrees
- rulings by the Court of Budgetary and Financial Discipline. | [
-0.4121806025505066,
-0.6782709956169128,
0.3946336805820465,
0.4535071551799774,
-0.450262188911438,
-0.42712488770484924,
-0.22626686096191406,
-0.21778039634227753,
0.5452285408973694,
1.4594581127166748,
-0.11759217083454132,
-0.4037541151046753,
-0.49792566895484924,
0.038872446864843... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ehartford/ultrachat-uncensored | ehartford | 2023-10-23T05:29:16Z | 82 | 16 | null | [
"license:mit",
"region:us"
] | 2023-10-23T05:29:16Z | 2023-10-12T05:25:04.000Z | 2023-10-12T05:25:04 | ---
license: mit
---
This is based on ultrachat dataset https://huggingface.co/datasets/stingning/ultrachat
I filtered it using the classic "unfiltered" keywords list https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered to remove instances of refusals and bias
About 90% of the dataset was removed.
What remains (400k conversations) is unlikely to incline the model to refuse.
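A keyword-based filter of this kind can be sketched as follows (the phrases below are a small hypothetical sample, not the actual "unfiltered" list):

```python
# Hypothetical sketch of the refusal filter; these phrases are a small
# illustrative sample, not the actual keyword list linked above.
REFUSAL_PHRASES = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
]

def keep_conversation(turns: list[str]) -> bool:
    """Keep a conversation only if no turn contains a refusal phrase."""
    return not any(
        phrase in turn.lower()
        for turn in turns
        for phrase in REFUSAL_PHRASES
    )

print(keep_conversation(["Hi!", "Hello, how can I help?"]))  # True
```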
I am investigating a less heavy-handed approach that uses dolphin-2.1 to reword any detected refusals.
-0.7114272117614746,
-0.6970067024230957,
0.3731948435306549,
-0.01943150907754898,
-0.6529189944267273,
-0.20452284812927246,
0.01498106773942709,
-0.5327111482620239,
0.3508899211883545,
1.1381096839904785,
-0.9630928635597229,
-0.45153412222862244,
-0.4573865532875061,
0.095512807369232... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Ermarrero/riddles_v1 | Ermarrero | 2023-10-18T15:51:02Z | 82 | 2 | null | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-10-18T15:51:02Z | 2023-10-18T03:53:09.000Z | 2023-10-18T03:53:09 | ---
license: apache-2.0
language:
- en
---
All credit goes to https://github.com/crawsome/riddles
I had GPT-4 process each riddle by giving it the correct answer in order to create a desirable output.
The prompt used to create the output was:
"Answer the users question using the answer. Do not mention you have the answer to the riddle. You are to answer it as if you never got the answer. Prioritize educational insights by breaking down the logic behind your decisions. Make sure to cover all relevant topics in a logical order."
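The generation step described above can be sketched like this; the helper name and message layout are assumptions, only the prompt text comes from the card, and the actual GPT-4 API call is omitted:

```python
# Hypothetical sketch of how each riddle/answer pair could be packaged
# for the GPT-4 call described above. Only the system prompt text is
# taken from the card; everything else is illustrative.
SYSTEM_PROMPT = (
    "Answer the users question using the answer. Do not mention you have "
    "the answer to the riddle. You are to answer it as if you never got "
    "the answer. Prioritize educational insights by breaking down the "
    "logic behind your decisions. Make sure to cover all relevant topics "
    "in a logical order."
)

def build_messages(riddle: str, answer: str) -> list[dict]:
    """Build a chat payload pairing a riddle with its known answer."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Question: {riddle}\nAnswer: {answer}"},
    ]
```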
The goal of this dataset is to fine-tune a language model to output reasoning.
The performance with this small dataset is pretty good: the model does a rather good job of breaking down a question in a logical order.
Open Orca Mistral 7B fine-tuned with this dataset for one epoch:
Question: I am easy to lift, but hard to throw. What am I?
Output: The riddle describes an object that is easy to lift but hard to throw. This suggests that the object is lightweight and not difficult to pick up, but also lacks the properties that would make it suitable for throwing, such as aerodynamics or a solid shape.
One object that fits this description is a feather. Feathers are lightweight and easy to lift, but they are not designed for throwing. Instead, they are used for insulation, flight, and decoration.
So, the answer to the riddle is a feather.
Correct Answer: Feather
Question: Rachel goes to the supermarket and buys 10 tomatoes. Unfortunately, on the way back home, all but 9 get ruined. How many tomatoes are left in a good condition?
Output: Let's break down the problem step by step. Rachel buys 10 tomatoes, but on her way back home, all but 9 get ruined. This means that 10 - 9 = 1 tomato gets ruined. Since there are 9 tomatoes left in good condition, the answer is 9.
So, Rachel has 9 tomatoes left in a good condition after her journey back home.
Correct Answer: 9
| [
-0.06070249155163765,
-0.9952693581581116,
0.3683150112628937,
-0.06780228018760681,
-0.13973042368888855,
-0.06842507421970367,
-0.032050713896751404,
-0.5742206573486328,
0.2702856659889221,
0.332660049200058,
-0.6935046315193176,
0.039544764906167984,
-0.6961683034896851,
0.409507185220... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexandrainst/lexdk-open | alexandrainst | 2023-10-27T20:22:31Z | 82 | 1 | null | [
"size_categories:10K<n<100K",
"language:da",
"license:cc0-1.0",
"region:us"
] | 2023-10-27T20:22:31Z | 2023-10-25T09:36:11.000Z | 2023-10-25T09:36:11 | ---
language:
- da
license: cc0-1.0
size_categories:
- 10K<n<100K
pretty_name: Lex.dk Open
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: clarification
dtype: string
- name: authors
sequence: string
- name: date
dtype: string
- name: license
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18335490
num_examples: 11887
download_size: 10050922
dataset_size: 18335490
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Lex.dk Open
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 10.05 MB
- **Size of the generated dataset:** 18.34 MB
- **Total amount of disk used:** 28.39 MB
### Dataset Summary
This dataset consists of articles from the Danish encyclopedia [Lex.dk](https://www.lex.dk).
Only the articles released with a permissive license are included here; these constitute about 7.5% of all articles.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 10.05 MB
- **Size of the generated dataset:** 18.34 MB
- **Total amount of disk used:** 28.39 MB
An example from the dataset looks as follows.
```
{
'url': 'https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle',
'title': 'Kullmanns Mølle',
'clarification': '',
'authors': ['https://brugere.lex.dk/6929'],
'date': '2021-01-20T13:23:20+01:00',
'license': 'fri anvendelse',
'text': 'Kullmanns Mølle er en mølle i Gudhjem, opkaldt efter Matts Kullmann, der byggede møllen i 1893 til sin søn, Christian Kullmann, se Gudhjem Mølle.'
}
```
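As an illustration (a sketch, not part of any official tooling for this dataset), a record like the one above can be flattened into a single text string, e.g. for language-model training:

```python
# Sample record copied from the card; the helper below is illustrative.
sample = {
    "url": "https://denstoredanske.lex.dk/Kullmanns_M%C3%B8lle",
    "title": "Kullmanns Mølle",
    "clarification": "",
    "authors": ["https://brugere.lex.dk/6929"],
    "date": "2021-01-20T13:23:20+01:00",
    "license": "fri anvendelse",
    "text": "Kullmanns Mølle er en mølle i Gudhjem, opkaldt efter Matts Kullmann, der byggede møllen i 1893 til sin søn, Christian Kullmann, se Gudhjem Mølle.",
}

def to_plain_text(record: dict) -> str:
    """Join title, optional clarification, and body into one string."""
    parts = [record["title"]]
    if record["clarification"]:
        parts.append(f"({record['clarification']})")
    parts.append(record["text"])
    return "\n".join(parts)

print(to_plain_text(sample).splitlines()[0])  # Kullmanns Mølle
```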
### Data Fields
The data fields are the same among all splits.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `clarification`: a `string` feature.
- `authors`: a `list` feature.
- `authors`: a `string` feature.
- `license`: a `string` feature.
- `text`: a `string` feature.
### Dataset Statistics
There are 11,887 samples in the dataset.
#### Article Length Distribution

## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) built the dataset and uploaded it to the Hugging Face Hub.
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/). | [
-0.6602432727813721,
-0.3646295666694641,
0.38852834701538086,
0.13619673252105713,
-0.450998455286026,
-0.1300102174282074,
-0.22182779014110565,
-0.3438393473625183,
0.5272418260574341,
0.40712714195251465,
-0.8560897707939148,
-1.1390587091445923,
-0.3463042676448822,
0.259317547082901,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tahrirchi/uz-books | tahrirchi | 2023-10-28T19:11:13Z | 82 | 8 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:uz",
"license:apache-2.0",
"uz",
"books",
"region:us"
... | 2023-10-28T19:11:13Z | 2023-10-27T16:35:16.000Z | 2023-10-27T16:35:16 | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: lat
path: data/lat-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: original
num_bytes: 19244856855
num_examples: 39712
- name: lat
num_bytes: 13705512346
num_examples: 39712
download_size: 16984559355
dataset_size: 32950369201
annotations_creators:
- no-annotation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- monolingual
language:
- uz
size_categories:
- 10M<n<100M
pretty_name: UzBooks
license: apache-2.0
tags:
- uz
- books
---
# Dataset Card for UzBooks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://tahrirchi.uz/grammatika-tekshiruvi](https://tahrirchi.uz/grammatika-tekshiruvi)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.98 GB
- **Size of the generated dataset:** 32.95 GB
- **Total amount of disk used:** 49.93 GB
### Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzBooks dataset, a cleaned book corpus consisting of nearly 40,000 books in the Uzbek language, divided into two branches: "original" and "lat", representing the OCRed (Latin and Cyrillic) and fully Latin versions of the texts, respectively.
Please refer to our [blogpost](https://tahrirchi.uz/grammatika-tekshiruvi) and paper (Coming soon!) for further details.
To load and use the dataset, run this script:
```python
from datasets import load_dataset
uz_books=load_dataset("tahrirchi/uz-books")
```
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 16.98 GB
- **Size of the generated dataset:** 32.95 GB
- **Total amount of disk used:** 49.93 GB
An example of 'train' looks as follows.
```
{
"text": "Hamsa\nAlisher Navoiy ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature that contains text of the books.
### Data Splits
| name | |
|-----------------|--------:|
| original | 39712 |
| lat | 39712 |
## Dataset Creation
The books were crawled from various internet sources and preprocessed using Optical Character Recognition with the [Tesseract OCR Engine](https://github.com/tesseract-ocr/tesseract). The Latin version was created by converting the original dataset with highly curated transliteration scripts, in order to better support research and development in the field.
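For illustration only (the actual curated conversion scripts are not part of this card), a minimal Cyrillic-to-Latin Uzbek transliterator might look like this; the mapping is simplified and omits context-dependent rules such as word-initial "е" -> "ye":

```python
# Simplified illustrative mapping; the real curated scripts handle many
# more context-dependent rules than a flat character table can.
CYR2LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ё": "yo", "ж": "j", "з": "z", "и": "i", "й": "y", "к": "k",
    "л": "l", "м": "m", "н": "n", "о": "o", "п": "p", "р": "r",
    "с": "s", "т": "t", "у": "u", "ф": "f", "х": "x", "ц": "ts",
    "ч": "ch", "ш": "sh", "ъ": "ʼ", "ь": "", "э": "e", "ю": "yu",
    "я": "ya", "ў": "oʻ", "қ": "q", "ғ": "gʻ", "ҳ": "h",
}

def to_latin(text: str) -> str:
    """Transliterate Uzbek Cyrillic to Latin, passing other chars through."""
    out = []
    for ch in text:
        lat = CYR2LAT.get(ch.lower(), ch)
        out.append(lat.capitalize() if ch.isupper() else lat)
    return "".join(out)

print(to_latin("китоб"))  # kitob
```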
## Citation
Please cite this model using the following format:
```
@online{Mamasaidov2023UzBooks,
author = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
title = {UzBooks dataset},
year = {2023},
url = {https://huggingface.co/datasets/tahrirchi/uz-books},
note = {Accessed: 2023-10-28}, % change this date
urldate = {2023-10-28} % change this date
}
```
## Gratitude
We are thankful to these awesome organizations and people for helping to make it happen:
- [Ilya Gusev](https://github.com/IlyaGusev/): for advice throughout the process
- [David Dale](https://daviddale.ru): for advice throughout the process
## Contacts
We believe that this work will enable and inspire all enthusiasts around the world to open the hidden beauty of low-resource languages, in particular Uzbek.
For further development and issues regarding the dataset, please contact m.mamasaidov@tahrirchi.uz or a.shopolatov@tahrirchi.uz.
-0.19015483558177948,
-0.06340225040912628,
-0.026393016800284386,
-0.12318761646747589,
-0.43606388568878174,
0.05054821819067001,
-0.17923834919929504,
-0.3777042329311371,
0.07262765616178513,
0.6821366548538208,
-0.645859956741333,
-0.9084548950195312,
-0.2139911651611328,
0.2442473173... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KomeijiForce/Text2Emoji | KomeijiForce | 2023-11-06T06:26:11Z | 82 | 3 | null | [
"task_categories:translation",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | 2023-11-06T06:26:11Z | 2023-11-06T03:29:15.000Z | 2023-11-06T03:29:15 | ---
task_categories:
- translation
- text-generation
language:
- en
size_categories:
- 100K<n<1M
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
emi429/humansleepproject-rr | emi429 | 2023-11-08T19:14:19Z | 82 | 0 | null | [
"region:us"
] | 2023-11-08T19:14:19Z | 2023-11-08T19:14:07.000Z | 2023-11-08T19:14:07 | ---
dataset_info:
features:
- name: rr_intervals
sequence: float64
- name: patient_id
dtype: string
- name: sleep_stage
sequence: string
splits:
- name: train
num_bytes: 383327073
num_examples: 469
download_size: 19983035
dataset_size: 383327073
---
# Dataset Card for "humansleepproject-rr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48843106627464294,
-0.07595955580472946,
0.0880604237318039,
0.3922465741634369,
-0.08408248424530029,
0.2261606752872467,
0.2931315302848816,
-0.3776792287826538,
0.9210872054100037,
0.4711539149284363,
-0.9934053421020508,
-0.6955307722091675,
-0.446560263633728,
-0.18023993074893951,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
shi3z/Japanese_Wikipedia_Conversation | shi3z | 2023-11-10T22:46:29Z | 82 | 5 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:ja",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-11-10T22:46:29Z | 2023-11-10T07:36:40.000Z | 2023-11-10T07:36:40 | ---
license: cc-by-sa-4.0
task_categories:
- conversational
language:
- ja
size_categories:
- 10K<n<100K
---
This is a dataset of conversations generated with GPT-3.5-Turbo from the Japanese Wikipedia dataset (izumi-lab/wikipedia-ja-20230720).
Because GPT-3.5-Turbo was used to create it, commercial use is not permitted.
-0.4108586013317108,
-0.8125351071357727,
0.784458577632904,
0.4860193133354187,
-0.7468231320381165,
0.03294551745057106,
-0.04149334505200386,
-0.4683808982372284,
0.34589675068855286,
0.15645532310009003,
-1.047611117362976,
-0.10302005708217621,
-0.5537780523300171,
0.2931051254272461,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lifan-Z/tox-antitox-proteins | Lifan-Z | 2023-11-13T09:00:37Z | 82 | 0 | null | [
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"biology",
"medical",
"region:us"
] | 2023-11-13T09:00:37Z | 2023-11-13T02:52:31.000Z | 2023-11-13T02:52:31 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- biology
- medical
pretty_name: tox-antitox-proteins
size_categories:
- n<1K
---
This dataset is used for fine-tuning ProtGPT2. The features are ['attention_mask', 'input_ids']; there is no 'labels' column.
After passing the data through DataCollatorForLanguageModeling and a DataLoader, the features become ['attention_mask', 'input_ids', 'labels'].
-0.7465370297431946,
-0.42321768403053284,
-0.13222405314445496,
0.19668641686439514,
-0.44686850905418396,
-0.2253403216600418,
-0.03423284739255905,
-0.0926503911614418,
-0.08963892608880997,
0.7941369414329529,
-1.0330488681793213,
-0.2847486138343811,
-0.5838263034820557,
-0.0861714035... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
atmallen/qm_alice_mixture_1.0e | atmallen | 2023-11-16T18:17:54Z | 82 | 0 | null | [
"region:us"
] | 2023-11-16T18:17:54Z | 2023-11-16T03:33:04.000Z | 2023-11-16T03:33:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 22366655.5
num_examples: 200000
- name: validation
num_bytes: 2254431.5
num_examples: 20000
- name: test
num_bytes: 2248382.5
num_examples: 20000
download_size: 0
dataset_size: 26869469.5
---
# Dataset Card for "qm_alice__mixture_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6196177005767822,
-0.23818856477737427,
0.2790529131889343,
0.3366594910621643,
-0.2271079421043396,
0.060786377638578415,
0.6237174868583679,
-0.023386726155877113,
0.9700729846954346,
0.44074612855911255,
-0.7608954310417175,
-0.843500554561615,
-0.5937542915344238,
-0.361078888177871... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
vlsp-2023-vllm/exams_vi | vlsp-2023-vllm | 2023-11-20T12:12:10Z | 82 | 0 | null | [
"region:us"
] | 2023-11-20T12:12:10Z | 2023-11-19T09:52:41.000Z | 2023-11-19T09:52:41 | ---
dataset_info:
features:
- name: question
dtype: string
- name: id
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: answerKey
dtype: string
- name: metadata
struct:
- name: grade
dtype: string
- name: subject
dtype: string
splits:
- name: test
num_bytes: 7847326
num_examples: 19150
download_size: 3472929
dataset_size: 7847326
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "exams_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.434678852558136,
-0.24180300533771515,
0.19520941376686096,
0.15856362879276276,
-0.2230212539434433,
-0.24656781554222107,
0.39499711990356445,
0.06620891392230988,
0.676971971988678,
0.505330502986908,
-0.7205005884170532,
-0.7542616128921509,
-0.292982280254364,
-0.16785621643066406,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deokhk/am_wiki_sentences_100000 | deokhk | 2023-11-21T07:38:28Z | 82 | 0 | null | [
"region:us"
] | 2023-11-21T07:38:28Z | 2023-11-21T07:38:22.000Z | 2023-11-21T07:38:22 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 15005045
num_examples: 100000
- name: dev
num_bytes: 114806
num_examples: 1000
download_size: 7271644
dataset_size: 15119851
---
# Dataset Card for "am_wiki_sentences_100000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6089914441108704,
-0.5364269614219666,
0.12742583453655243,
0.41400274634361267,
-0.07063919305801392,
-0.25631818175315857,
-0.04242316260933876,
-0.10817159712314606,
0.9508715271949768,
0.5522432327270508,
-0.8027204871177673,
-0.6396758556365967,
-0.39795252680778503,
0.180319234728... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai4bharat/IndicQuestionGeneration | ai4bharat | 2022-10-13T06:08:25Z | 81 | 1 | null | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:98K<n<98K",
"source_datasets:we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages.",
... | 2022-10-13T06:08:25Z | 2022-03-10T09:59:41.000Z | 2022-03-10T09:59:41 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: IndicQuestionGeneration
size_categories:
- 98K<n<98K
source_datasets:
- we start with the SQuAD question answering dataset repurposed to serve as a question generation dataset. We translate this dataset into different Indic languages.
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-question-generation
---
# Dataset Card for "IndicQuestionGeneration"
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indicnlg-suite
- **Paper:** [IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages](https://arxiv.org/abs/2203.05437)
- **Point of Contact:**
### Dataset Summary
IndicQuestionGeneration is the question generation dataset released as part of IndicNLG Suite. Each
example has five fields: id, squad_id, answer, context and question. We create this dataset in eleven
languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te. The data is translated, so the examples are parallel across languages, differing only in the language of the text.
The number of examples in each language is 98,027.
### Supported Tasks and Leaderboards
**Tasks:** Question Generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Kannada (kn)`
- `Hindi (hi)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`
## Dataset Structure
### Data Instances
One random example from the `hi` dataset is given below in JSON format.
```
{
"id": 8,
"squad_id": "56be8e613aeaaa14008c90d3",
"answer": "अमेरिकी फुटबॉल सम्मेलन",
"context": "अमेरिकी फुटबॉल सम्मेलन (एएफसी) के चैंपियन डेनवर ब्रोंकोस ने नेशनल फुटबॉल कांफ्रेंस (एनएफसी) की चैंपियन कैरोलिना पैंथर्स को 24-10 से हराकर अपना तीसरा सुपर बाउल खिताब जीता।",
"question": "एएफसी का मतलब क्या है?"
}
```
### Data Fields
- `id (string)`: Unique identifier.
- `squad_id (string)`: Unique identifier in the SQuAD dataset.
- `answer (string)`: Answer, one of the two inputs.
- `context (string)`: Context, the other input.
- `question (string)`: Question, the output.
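For sequence-to-sequence question generation, the two inputs are typically joined into a single encoder string. A minimal sketch — the separator token here is an assumption, not necessarily the exact format used by the IndicNLG baselines:

```python
def build_qg_input(answer: str, context: str, sep: str = " <sep> ") -> str:
    """Join answer and context into one encoder input for a seq2seq QG model.

    The separator string is a placeholder assumption; check the paper/code
    for the exact format used by the IndicNLG baselines.
    """
    return answer + sep + context
```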
### Data Splits
Here is the number of samples in each split for all the languages.
Language | ISO 639-1 Code | Train | Dev | Test |
---------- | ---------- | ---------- | ---------- | ---------- |
Assamese | as | 69,979 | 17,495 | 10,553 |
Bengali | bn | 69,979 | 17,495 | 10,553 |
Gujarati | gu | 69,979 | 17,495 | 10,553 |
Hindi | hi | 69,979 | 17,495 | 10,553 |
Kannada | kn | 69,979 | 17,495 | 10,553 |
Malayalam | ml | 69,979 | 17,495 | 10,553 |
Marathi | mr | 69,979 | 17,495 | 10,553 |
Oriya | or | 69,979 | 17,495 | 10,553 |
Punjabi | pa | 69,979 | 17,495 | 10,553 |
Tamil | ta | 69,979 | 17,495 | 10,553 |
Telugu | te | 69,979 | 17,495 | 10,553 |
## Dataset Creation
### Curation Rationale
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Source Data
[SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/)
#### Initial Data Collection and Normalization
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
#### Who are the source language producers?
[Detailed in the paper](https://arxiv.org/abs/2203.05437)
### Annotations
[More information needed]
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
[More information needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More information needed]
### Discussion of Biases
[More information needed]
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
[More information needed]
### Licensing Information
Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/). Copyright of the dataset contents belongs to the original copyright holders.
### Citation Information
If you use any of the datasets, models or code modules, please cite the following paper:
```
@inproceedings{Kumar2022IndicNLGSM,
title={IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages},
author={Aman Kumar and Himani Shrotriya and Prachi Sahu and Raj Dabre and Ratish Puduppully and Anoop Kunchukuttan and Amogh Mishra and Mitesh M. Khapra and Pratyush Kumar},
year={2022},
url = "https://arxiv.org/abs/2203.05437",
}
```
### Contributions
[Detailed in the paper](https://arxiv.org/abs/2203.05437) | [
-0.521327793598175,
-0.6005013585090637,
-0.1278429478406906,
0.39316698908805847,
-0.21365787088871002,
0.27179640531539917,
-0.514108419418335,
-0.44075942039489746,
0.40583959221839905,
0.2497083842754364,
-0.8886772990226746,
-0.7718185186386108,
-0.5346751809120178,
0.512029230594635,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pacovaldez/stackoverflow-questions-2016 | pacovaldez | 2022-11-30T23:16:54Z | 81 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technic... | 2022-11-30T23:16:54Z | 2022-11-30T01:18:27.000Z | 2022-11-30T01:18:27 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: stackoverflow_post_questions
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- stackoverflow
- technical questions
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is the prioritization of the question. The classification scale usually consists of 4 values, P0, P1, P2, and P3, with different meanings across every participant in the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions in SO are
usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions, plus a label value (0, 1, 2, or 3) calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
- `title`: string
- `body`: string
- `label`: int
### Data Splits
The split is 40/40/20, with the classes balanced to be around the same size.
## Dataset Creation
The data set was extracted and labeled with the following query in BigQuery:
```
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
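The `CASE` expression maps one-to-one onto a small Python function; a sketch for readers who want to reproduce the labels locally:

```python
def priority_label(score: int, favorite_count: int, view_count: int) -> int:
    """Replicate the BigQuery CASE expression that labels each question.

    Lower labels mean higher priority, mirroring the P0..P3 scale.
    """
    if score >= 100 or favorite_count >= 100 or view_count >= 10000:
        return 0
    if score >= 25 or favorite_count >= 25 or view_count >= 2500:
        return 1
    if score >= 10 or favorite_count >= 10 or view_count >= 1000:
        return 2
    return 3
```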
### Source Data
The data was extracted from the Big Query public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained high class imbalance:

| label | count |
|-------|------------|
| 0 | 977,424 |
| 1 | 2,401,534 |
| 2 | 3,418,179 |
| 3 | 16,222,990 |
| **Total** | 23,020,127 |
The data was then sampled from each class so that every class has around the same number of records.
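The balancing step can be sketched as a simple downsampling pass (illustrative only; the exact sampling procedure used is not documented here):

```python
import random

def balance_classes(rows, key=lambda r: r["label"], seed=0):
    """Downsample every class to the size of the smallest one (a sketch)."""
    by_label = {}
    for r in rows:
        by_label.setdefault(key(r), []).append(r)
    n = min(len(v) for v in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, n))
    return balanced
```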
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
| [
-0.9427905082702637,
-0.6233850717544556,
0.13444818556308746,
0.24416834115982056,
-0.2413521260023117,
0.06649365276098251,
-0.08735049515962601,
-0.17303334176540375,
0.3847637176513672,
0.607528030872345,
-0.4429824650287628,
-0.6410248875617981,
-0.6805639863014221,
-0.075287967920303... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/miracl-fr-corpus-22-12 | Cohere | 2023-02-06T11:57:34Z | 81 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-02-06T11:57:34Z | 2023-01-31T06:02:06.000Z | 2023-01-31T06:02:06 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product** similarity.
Compare the query embedding either with a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fr-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for the selected query
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based quality metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
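The hit@3 metric is easy to compute from a ranked result list; a minimal sketch:

```python
def hit_at_k(ranked_doc_ids, relevant_ids, k: int = 3) -> int:
    """Return 1 if any of the top-k retrieved documents is relevant, else 0."""
    return int(any(doc_id in relevant_ids for doc_id in ranked_doc_ids[:k]))
```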
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
| [
-0.6344762444496155,
-0.8110339045524597,
0.31565192341804504,
0.2613868713378906,
-0.059219710528850555,
-0.06759046018123627,
-0.3108191192150116,
-0.5036748647689819,
0.5457637310028076,
0.21529194712638855,
-0.5575382113456726,
-1.0098727941513062,
-0.7041245698928833,
0.34159016609191... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
medalpaca/medical_meadow_pubmed_causal | medalpaca | 2023-04-06T17:01:00Z | 81 | 2 | null | [
"task_categories:question-answering",
"language:en",
"region:us"
] | 2023-04-06T17:01:00Z | 2023-04-06T16:59:22.000Z | 2023-04-06T16:59:22 | ---
task_categories:
- question-answering
language:
- en
---
# Dataset Card for Pubmed Causal
## Dataset Description
- **Paper:** https://aclanthology.org/D19-1473/
### Dataset Summary
This is the dataset used in the paper: Detecting Causal Language Use in Science Findings.
### Citation Information
```
@inproceedings{yu-etal-2019-detecting,
title = "Detecting Causal Language Use in Science Findings",
author = "Yu, Bei and
Li, Yingya and
Wang, Jun",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-1473",
doi = "10.18653/v1/D19-1473",
pages = "4664--4674",
}
``` | [
0.017358893528580666,
-0.7789967060089111,
0.49242305755615234,
0.499442458152771,
-0.3018395006656647,
-0.37898051738739014,
-0.20155566930770874,
-0.42039117217063904,
0.33040326833724976,
0.4410775601863861,
-0.3502785563468933,
-0.6391820311546326,
-0.625193178653717,
0.568481564521789... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/seeds | mstz | 2023-04-16T17:58:19Z | 81 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc",
"seeds",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | 2023-04-16T17:58:19Z | 2023-04-13T10:55:57.000Z | 2023-04-13T10:55:57 | ---
language:
- en
tags:
- seeds
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Seeds
size_categories:
- 1K<n<10K
task_categories:
- tabular-classification
configs:
- seeds
- seeds_binary
license: cc
---
# Seeds
The [Seeds dataset](https://archive-beta.ics.uci.edu/dataset/236/seeds) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| seeds | Multiclass classification.| |
| seeds_0 | Binary classification. | Is the seed of class 0? |
| seeds_1 | Binary classification. | Is the seed of class 1? |
| seeds_2 | Binary classification. | Is the seed of class 2? | | [
-0.4819001257419586,
-0.07675720751285553,
0.26736682653427124,
0.48842597007751465,
-0.04250550642609596,
-0.14983028173446655,
-0.0320441797375679,
-0.2541881203651428,
0.2671267092227936,
0.37205132842063904,
-0.6335316300392151,
-0.7309476733207703,
-1.0347498655319214,
0.2169628888368... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DeadPixels/DPhi_Sprint_25_Flowers | DeadPixels | 2023-04-29T10:34:03Z | 81 | 0 | null | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"license:cc-by-2.0",
"region:us"
] | 2023-04-29T10:34:03Z | 2023-04-29T10:25:36.000Z | 2023-04-29T10:25:36 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': daisy
'1': dandelion
'2': rose
'3': sunflower
'4': tulip
splits:
- name: train
num_bytes: 123964921.405
num_examples: 2589
- name: test
num_bytes: 47588262
num_examples: 864
- name: validation
num_bytes: 47493769
num_examples: 864
download_size: 237386772
dataset_size: 219046952.405
license: cc-by-2.0
task_categories:
- image-classification
pretty_name: 'Data Sprint #25: Flower Recognition Dataset'
size_categories:
- 1K<n<10K
---
# Dataset Card for "DPhi_Sprint_25_Flowers"
All images in this archive are licensed under the Creative Commons By-Attribution License, available at:
https://creativecommons.org/licenses/by/2.0/
The photographers are listed in LICENSE.txt, thanks to all of them for making their work available.
Note that the image file names differ from those in the original archive; they were changed solely for the purpose of the data sprint.
0.020099172368645668,
0.037408050149679184,
0.24945539236068726,
0.6675243377685547,
-0.5770195126533508,
0.05684938654303551,
0.18528032302856445,
-0.5221327543258667,
-0.08864548802375793,
0.6092527508735657,
-1.3652385473251343,
-0.6456913948059082,
-0.37880438566207886,
0.1184667125344... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/results | open-llm-leaderboard | 2023-11-28T07:25:22Z | 81 | 18 | null | [
"region:us"
] | 2023-11-28T07:25:22Z | 2023-06-19T15:15:24.000Z | 2023-06-19T15:15:24 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CheshireAI/guanaco-unchained | CheshireAI | 2023-08-17T00:12:34Z | 81 | 23 | null | [
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-08-17T00:12:34Z | 2023-07-07T09:40:46.000Z | 2023-07-07T09:40:46 | ---
license: apache-2.0
language:
- en
pretty_name: Guanaco Unchained
size_categories:
- 1K<n<10K
---
# Guanaco Unchained
"Guanaco Unchained" is a refined and optimized version of the original [Guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). It is specifically curated to maintain high-quality data while minimizing alignment issues.
The main transformations that were applied to the dataset include:
- Language Filtering: To ensure quality control, most of the non-English prompts were removed.
- AI Identification Removal: Any references suggesting the model's identity as AI, such as "OpenAssistant", "As an AI language model", and similar prompts, were removed. This adjustment allows for a more human-like interaction.
- Content Refining: Responses that indicated refusal, moralizing, or strong subjectivity were either removed or modified to increase accuracy and reduce bias.
- Context Trimming: In scenarios where a human response lacked a corresponding model answer, the last human response was removed to maintain consistency in the instruct pair format.
- Apologetic Language Reduction: The dataset was also revised to remove or modify apologetic language in the responses, thereby ensuring assertiveness and precision.
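The curation was done manually, but the AI-identification filter can be approximated with simple substring checks. A rough sketch — the marker list below is illustrative, taken from the phrases mentioned above:

```python
AI_MARKERS = (
    "as an ai language model",
    "openassistant",
)

def keep_pair(prompt: str, response: str) -> bool:
    """Return False for responses that reveal an AI identity.

    Illustrative only: the real curation was manual and also handled
    refusals, moralizing, and apologetic language.
    """
    lowered = response.lower()
    return not any(marker in lowered for marker in AI_MARKERS)
```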
Dataset Information:
The primary source of the data is the [Guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). Following this, a series of processing steps (as outlined above) were performed to remove unnecessary or ambiguous elements, resulting in the "Guanaco Unchained" dataset. The structure of the dataset remains consistent with the original Guanaco dataset, containing pairs of human prompts and assistant responses.
Known Limitations:
The dataset was manually curated, and therefore, may contain unintentional errors, oversights, or inconsistencies. Despite the concerted effort to remove all instances of AI identification, there may still be undetected instances. The dataset's multilingual capability may also be reduced due to the removal of non-English prompts.
Additional Information:
The "Guanaco Unchained" dataset is ideally suited for any application that aims for a more human-like interaction with minimized AI identifiers and alignment issues. It is particularly beneficial in contexts where direct, assertive, and high-quality English responses are desired.
| [
-0.26696905493736267,
-0.7662283182144165,
0.11590930074453354,
-0.005616877228021622,
-0.3976641893386841,
0.39751288294792175,
-0.38887113332748413,
-0.45481523871421814,
0.3500572443008423,
0.5965403914451599,
-0.6856366991996765,
-0.6983737349510193,
-0.2665069103240967,
0.007790575735... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Tverous/claim-amr | Tverous | 2023-07-28T00:08:45Z | 81 | 0 | null | [
"region:us"
] | 2023-07-28T00:08:45Z | 2023-07-21T09:16:55.000Z | 2023-07-21T09:16:55 | ---
dataset_info:
features:
- name: uid
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: reason
dtype: string
- name: claim_cleaned_amr
dtype: string
splits:
- name: train
num_bytes: 60227369
num_examples: 100459
- name: dev
num_bytes: 853786
num_examples: 1200
- name: test
num_bytes: 846997
num_examples: 1200
download_size: 21047805
dataset_size: 61928152
---
# Dataset Card for "claim-amr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5194985866546631,
-0.1691618710756302,
-0.016377871856093407,
0.0846429392695427,
0.056627437472343445,
0.14877498149871826,
0.1973503828048706,
-0.40752729773521423,
1.1014599800109863,
0.46329885721206665,
-0.6559662818908691,
-0.46855300664901733,
-0.5978237390518188,
-0.166854456067... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Shrey-1329/cxiu_hf_dataset | Shrey-1329 | 2023-07-25T15:05:56Z | 81 | 1 | null | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:en",
"medical",
"region:us"
] | 2023-07-25T15:05:56Z | 2023-07-25T14:02:46.000Z | 2023-07-25T14:02:46 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1108945726.54
num_examples: 6060
download_size: 1108991167
dataset_size: 1108945726.54
task_categories:
- image-to-text
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
# Indiana University Chest Xray Dataset Card
## Data sources:
This is a converted and processed version of the open-access pneumonia chest x-ray dataset provided by Indiana University.<br>
You can see its information page [here](https://openi.nlm.nih.gov/faq).<br>
The compressed images in PNG format were downloaded from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz) and the corresponding reports from [here](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz).
## Data fields:
There are two fields: image and text.
The images are the x-rays and the texts are their associated findings.
## Preprocessing done:
1. **Make all text lowercase**: Convert all text to lowercase to ensure consistent and case-insensitive processing.
2. **Remove all punctuation**: Eliminate any punctuation marks (e.g., periods, commas, exclamation marks) from the text to avoid interference in language analysis.
3. **Remove all numbers**: Eliminate all numeric characters from the text since they might not be relevant for certain natural language processing tasks.
4. **Remove all words with 2 or more Xs in a row**: Remove any words that contain two or more consecutive occurrences of the letter "X" as they may not contribute meaningful information.
5. **Remove the bottom and top 2% of text by length**: Discard the shortest and longest text samples, removing the bottom 2% and top 2% of the text's length, respectively. This step is aimed at reducing the impact of outliers and ensuring a more balanced dataset. | [
-0.25033536553382874,
-0.28781411051750183,
0.5273675322532654,
0.10547811537981033,
-0.5734307765960693,
-0.18931108713150024,
-0.06137222424149513,
-0.16690130531787872,
0.25578734278678894,
0.8627109527587891,
-0.4546830952167511,
-0.6534112095832825,
-0.8896259665489197,
0.210184827446... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
brando/debug1_af | brando | 2023-10-20T19:03:38Z | 81 | 1 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-20T19:03:38Z | 2023-08-09T22:53:07.000Z | 2023-08-09T22:53:07 | ---
license: apache-2.0
---
If you find this useful, please cite it:
```
@software{brando2021ultimateutils,
author={Brando Miranda},
title={Ultimate Utils - the Ultimate Utils library for Machine Learning and Artificial Intelligence},
url={https://github.com/brando90/ultimate-utils},
year={2021}
}
```
It is not supposed to be used by people yet.
It is licensed under the **Apache License 2.0**.
The files cover the following topics:

| Topic | # of theorems | # Statements selected (floor) |
|-------|---------------|-------------------------------|
| Polynomial | 515 | 0 |
| Polynomial_Factorial | 47 | 11 |
-0.08536981791257858,
0.05280827730894089,
0.4806257486343384,
0.4165215790271759,
-0.2798440456390381,
0.03589414432644844,
0.12171651422977448,
-0.3321659564971924,
0.1335400491952896,
0.6047855615615845,
-0.44678816199302673,
-0.6057837009429932,
-0.520870566368103,
0.04087112843990326,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
silk-road/Chat-Haruhi-Fusion-A_B | silk-road | 2023-08-24T16:47:29Z | 81 | 3 | null | [
"region:us"
] | 2023-08-24T16:47:29Z | 2023-08-24T16:46:38.000Z | 2023-08-24T16:46:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: context
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 259951538
num_examples: 66519
download_size: 0
dataset_size: 259951538
---
# Dataset Card for "Chat-Haruhi-Fusion-A_B"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.500516414642334,
-0.4368354082107544,
-0.0019704068545252085,
0.24928513169288635,
-0.20563995838165283,
0.2161535918712616,
0.15598717331886292,
-0.2737210690975189,
1.096404790878296,
0.48043957352638245,
-1.0031412839889526,
-0.6023252010345459,
-0.514272153377533,
-0.382160067558288... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mindchain/bush_01 | mindchain | 2023-09-24T17:38:25Z | 81 | 0 | null | [
"region:us"
] | 2023-09-24T17:38:25Z | 2023-09-24T16:54:33.000Z | 2023-09-24T16:54:33 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
chengsp/geo | chengsp | 2023-10-19T07:16:00Z | 81 | 0 | null | [
"region:us"
] | 2023-10-19T07:16:00Z | 2023-10-13T12:11:52.000Z | 2023-10-13T12:11:52 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nbardy/renamed_waves | Nbardy | 2023-10-19T19:27:20Z | 81 | 0 | null | [
"region:us"
] | 2023-10-19T19:27:20Z | 2023-10-19T19:21:43.000Z | 2023-10-19T19:21:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 498961211.25
num_examples: 1306
download_size: 497509644
dataset_size: 498961211.25
---
# Dataset Card for "renamed_waves"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5537831783294678,
-0.38336697220802307,
0.3069649338722229,
0.4711691439151764,
-0.23417721688747406,
0.05954587832093239,
0.15304629504680634,
-0.3059702515602112,
1.0343824625015259,
0.547399640083313,
-0.8813735246658325,
-0.5048283338546753,
-0.7464422583580017,
-0.3730684220790863,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bh8648/split_dataset_16-2 | bh8648 | 2023-10-19T23:27:27Z | 81 | 0 | null | [
"region:us"
] | 2023-10-19T23:27:27Z | 2023-10-19T23:27:24.000Z | 2023-10-19T23:27:24 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 475910
num_examples: 105
download_size: 217195
dataset_size: 475910
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "split_dataset_16-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6418232321739197,
-0.3977450430393219,
0.16814105212688446,
0.2378254234790802,
-0.4865129888057709,
0.1585623323917389,
0.28284355998039246,
-0.24173864722251892,
0.8188090920448303,
0.44107377529144287,
-0.9119371771812439,
-0.5080401301383972,
-0.6247325539588928,
-0.2365560978651046... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
philikai/SQL_Spider_DDL | philikai | 2023-11-19T17:45:07Z | 81 | 1 | null | [
"license:mit",
"region:us"
] | 2023-11-19T17:45:07Z | 2023-10-26T12:44:18.000Z | 2023-10-26T12:44:18 | ---
license: mit
dataset_info:
features:
- name: DDL_schema
dtype: string
- name: query
dtype: string
- name: question
dtype: string
- name: db_id
dtype: string
splits:
- name: train
num_bytes: 15862354
num_examples: 8659
- name: validation
num_bytes: 1328365
num_examples: 1034
download_size: 621170
dataset_size: 17190719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
open-llm-leaderboard/details_01-ai__Yi-34B_public | open-llm-leaderboard | 2023-11-08T19:47:04Z | 81 | 0 | null | [
"region:us"
] | 2023-11-08T19:47:04Z | 2023-11-08T19:46:56.000Z | 2023-11-08T19:46:56 | ---
pretty_name: Evaluation run of 01-ai/Yi-34B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_01-ai__Yi-34B_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-08T19:46:38.378007](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-34B_public/blob/main/results_2023-11-08T19-46-38.378007.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.6081166107382551,\n\
\ \"em_stderr\": 0.004999326629880105,\n \"f1\": 0.6419882550335565,\n\
\ \"f1_stderr\": 0.004748239351156368,\n \"acc\": 0.6683760448499347,\n\
\ \"acc_stderr\": 0.012160441706531726\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.6081166107382551,\n \"em_stderr\": 0.004999326629880105,\n\
\ \"f1\": 0.6419882550335565,\n \"f1_stderr\": 0.004748239351156368\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5064442759666414,\n \
\ \"acc_stderr\": 0.013771340765699767\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8303078137332282,\n \"acc_stderr\": 0.010549542647363686\n\
\ }\n}\n```"
repo_url: https://huggingface.co/01-ai/Yi-34B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_08T19_46_38.378007
path:
- '**/details_harness|drop|3_2023-11-08T19-46-38.378007.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-08T19-46-38.378007.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_08T19_46_38.378007
path:
- '**/details_harness|gsm8k|5_2023-11-08T19-46-38.378007.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-08T19-46-38.378007.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_08T19_46_38.378007
path:
- '**/details_harness|winogrande|5_2023-11-08T19-46-38.378007.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-08T19-46-38.378007.parquet'
- config_name: results
data_files:
- split: 2023_11_08T19_46_38.378007
path:
- results_2023-11-08T19-46-38.378007.parquet
- split: latest
path:
- results_2023-11-08T19-46-38.378007.parquet
---
# Dataset Card for Evaluation run of 01-ai/Yi-34B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/01-ai/Yi-34B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_01-ai__Yi-34B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T19:46:38.378007](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-34B_public/blob/main/results_2023-11-08T19-46-38.378007.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.6081166107382551,
"em_stderr": 0.004999326629880105,
"f1": 0.6419882550335565,
"f1_stderr": 0.004748239351156368,
"acc": 0.6683760448499347,
"acc_stderr": 0.012160441706531726
},
"harness|drop|3": {
"em": 0.6081166107382551,
"em_stderr": 0.004999326629880105,
"f1": 0.6419882550335565,
"f1_stderr": 0.004748239351156368
},
"harness|gsm8k|5": {
"acc": 0.5064442759666414,
"acc_stderr": 0.013771340765699767
},
"harness|winogrande|5": {
"acc": 0.8303078137332282,
"acc_stderr": 0.010549542647363686
}
}
```
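As a sanity check, the aggregated `acc` in the `all` block is the unweighted mean of the per-task accuracies; a minimal sketch with the values copied from the JSON above:

```python
# The aggregated "acc" in the "all" block is the unweighted mean of the
# per-task accuracies reported for gsm8k and winogrande above.
results = {
    "harness|gsm8k|5": {"acc": 0.5064442759666414},
    "harness|winogrande|5": {"acc": 0.8303078137332282},
}

accs = [task["acc"] for task in results.values()]
mean_acc = sum(accs) / len(accs)

# Matches the "all" value up to floating-point rounding.
assert abs(mean_acc - 0.6683760448499347) < 1e-12
```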
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | [
-0.44488057494163513,
-0.6534476280212402,
0.20427286624908447,
0.24979768693447113,
-0.1272965967655182,
0.0731532946228981,
-0.3182018995285034,
-0.20884977281093597,
0.3667151927947998,
0.5382844805717468,
-0.7756655812263489,
-0.9645209312438965,
-0.6217008829116821,
0.1415327340364456... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
davidgaofc/RM_inout | davidgaofc | 2023-11-23T05:38:01Z | 81 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-23T05:38:01Z | 2023-11-23T04:18:35.000Z | 2023-11-23T04:18:35 | ---
license: mit
dataset_info:
features:
- name: Text
dtype: string
- name: Label
dtype: int64
splits:
- name: train
num_bytes: 791717
num_examples: 1640
download_size: 349585
dataset_size: 791717
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mulcyber/europarl-mono | mulcyber | 2021-02-05T16:05:40Z | 80 | 0 | null | [
"region:us"
] | 2021-02-05T16:05:40Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-pln-es/es_tweets_laboral | hackathon-pln-es | 2022-10-25T10:03:39Z | 80 | 1 | null | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | 2022-10-25T10:03:39Z | 2022-04-01T13:20:33.000Z | 2022-04-01T13:20:33 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
pretty_name: "Tweets en espa\xF1ol denuncia laboral"
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
---
# Dataset Card for [es_tweets_laboral]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Dataset created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Labeled by @DanielaGarciaQuezada
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.2683809697628021,
-0.4879953861236572,
0.16164010763168335,
0.43082550168037415,
-0.39930394291877747,
0.4258362650871277,
-0.46837225556373596,
-0.34730836749076843,
0.7384668588638306,
0.5903599262237549,
-0.9436470866203308,
-1.2324846982955933,
-0.852749764919281,
0.0959886536002159... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EddieChen372/tokenized-1024-react | EddieChen372 | 2022-06-24T05:51:29Z | 80 | 0 | null | [
"region:us"
] | 2022-06-24T05:51:29Z | 2022-06-10T06:03:20.000Z | 2022-06-10T06:03:20 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
imvladikon/bmc | imvladikon | 2022-11-17T16:52:43Z | 80 | 0 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-reuters-corpus",
"language:he",
"license:other",
"arxiv:2007.156... | 2022-11-17T16:52:43Z | 2022-06-22T15:39:14.000Z | 2022-06-22T15:39:14 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- he
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
train-eval-index:
- config: bmc
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: validation
test_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Splits for the Ben-Mordecai and Elhadad Hebrew NER Corpus (BMC)
In order to evaluate performance in accordance with the original Ben-Mordecai and Elhadad (2005) work, we provide three 75%-25% random splits.
* Only the 7 entity categories viable for evaluation were kept (DATE, LOC, MONEY, ORG, PER, PERCENT, TIME) --- all MISC entities were filtered out.
* Sequence label scheme was changed from IOB to BIOES
* The dev sets are 10% taken out of the 75%
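The IOB-to-BIOES scheme change mentioned above can be sketched as follows (a hypothetical helper assuming IOB2-style input tags, not the authors' actual conversion script):

```python
def iob_to_bioes(tags):
    """Convert an IOB tag sequence to BIOES (a.k.a. IOBES)."""
    bioes = []
    for i, tag in enumerate(tags):
        if tag == "O":
            bioes.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        # Does the same entity continue on the next token?
        next_is_inside = i + 1 < len(tags) and tags[i + 1] == f"I-{label}"
        if prefix == "B":
            # Entity start: B- if it continues, S- for a single-token entity.
            bioes.append(f"B-{label}" if next_is_inside else f"S-{label}")
        else:  # prefix == "I"
            # Entity interior: I- if it continues, E- for the entity end.
            bioes.append(f"I-{label}" if next_is_inside else f"E-{label}")
    return bioes

print(iob_to_bioes(["B-PER", "I-PER", "O", "B-LOC"]))
# → ['B-PER', 'E-PER', 'O', 'S-LOC']
```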
## Citation
If you use the BMC corpus, please cite the original paper as well as our paper, which describes the splits:
* Ben-Mordecai and Elhadad (2005):
```console
@mastersthesis{naama,
title={Hebrew Named Entity Recognition},
author={Ben-Mordecai, Naama},
advisor={Elhadad, Michael},
year={2005},
url="https://www.cs.bgu.ac.il/~elhadad/nlpproj/naama/",
institution={Department of Computer Science, Ben-Gurion University},
school={Department of Computer Science, Ben-Gurion University},
}
```
* Bareket and Tsarfaty (2020)
```console
@misc{bareket2020neural,
title={Neural Modeling for Named Entities and Morphology (NEMO^2)},
author={Dan Bareket and Reut Tsarfaty},
year={2020},
eprint={2007.15620},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
-0.64317387342453,
-0.6445387601852417,
0.22267192602157593,
0.22634270787239075,
-0.402762234210968,
0.1106986403465271,
-0.3491348922252655,
-0.8171479105949402,
0.4978891611099243,
0.18563663959503174,
-0.3547763228416443,
-0.6091108918190002,
-0.657472550868988,
0.2814723551273346,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kennbyee25/distilroberta-base_tokenized_english_patents | kennbyee25 | 2022-06-28T17:19:25Z | 80 | 0 | null | [
"region:us"
] | 2022-06-28T17:19:25Z | 2022-06-28T17:19:21.000Z | 2022-06-28T17:19:21 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ColumbiaNLP/FLUTE | ColumbiaNLP | 2022-10-07T18:28:02Z | 80 | 7 | null | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_ids:natural-language-inference",
"task_ids:explanation-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:crowdsourced... | 2022-10-07T18:28:02Z | 2022-07-05T14:38:38.000Z | 2022-07-05T14:38:38 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
- machine-generated
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: FLUTE
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text2text-generation
task_ids:
- natural-language-inference
- explanation-generation
---
# Dataset Card for FigLang2022SharedTask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://figlang2022sharedtask.github.io/
- **Repository:**
- **Paper:** TBA
- **Point of Contact:** tuhin.chakr@cs.columbia.edu
### Dataset Summary
Model-in-the-loop approach for figurative language generation and explainability
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
TBA
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | [
-0.6565388441085815,
-0.4976519048213959,
0.11394597589969635,
0.24876557290554047,
-0.4713422656059265,
0.10567474365234375,
-0.2837713360786438,
-0.40753069519996643,
0.45375514030456543,
0.9612830281257629,
-1.0118775367736816,
-0.8128015398979187,
-0.6922563910484314,
0.111737243831157... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
priyank-m/chinese_text_recognition | priyank-m | 2022-09-21T09:08:19Z | 80 | 8 | null | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:zh",
"ocr",
"text-recognition",
"chinese",
"region:us"
] | 2022-09-21T09:08:19Z | 2022-09-06T21:18:47.000Z | 2022-09-06T21:18:47 | ---
annotations_creators: []
language:
- zh
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: chinese_text_recognition
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- ocr
- text-recognition
- chinese
task_categories:
- image-to-text
task_ids:
- image-captioning
---
Source of data: https://github.com/FudanVI/benchmarking-chinese-text-recognition | [
-0.04905067756772041,
-0.38102129101753235,
0.29208412766456604,
0.2125415802001953,
-0.09203989803791046,
-0.2617035508155823,
-0.25080564618110657,
-0.5383390784263611,
-0.17857421934604645,
0.3410932719707489,
-0.3336036205291748,
-0.6735005378723145,
-0.029892297461628914,
0.1082445010... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mswhite/artwork | mswhite | 2023-02-20T18:32:56Z | 80 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | 2023-02-20T18:32:56Z | 2023-02-17T22:24:42.000Z | 2023-02-17T22:24:42 | ---
license: artistic-2.0
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
junelee/sharegpt_deepl_ko | junelee | 2023-04-27T01:43:36Z | 80 | 46 | null | [
"region:us"
] | 2023-04-27T01:43:36Z | 2023-04-11T02:30:26.000Z | 2023-04-11T02:30:26 | # shareGPT Korean Translation Dataset
This project is translating the 600k conversations of the shareGPT [dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main) into Korean via DeepL.
Translation is currently in progress; please refer to the status below.
## Progress
620k of 620k conversations translated.
## File structure
- original_dataset.json: the original shareGPT file (620k English conversations)
- ko_dataset.json: the translated shareGPT file, same structure as the original
- ko_dataset_2.json: a version of ko_dataset.json with malformed conversations removed (conversations that are empty, or that start with gpt and have no following turn)
- ko_alpaca_style_dataset.json: restructured for Alpaca fine-tuning
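The Alpaca-style restructuring could be sketched roughly like this (the `from`/`value` field names and the pairing logic are assumptions based on the two formats, not the exact conversion script):

```python
def sharegpt_to_alpaca(conversations):
    """Pair each human turn with the gpt turn that follows it, Alpaca-style."""
    records = []
    for i in range(len(conversations) - 1):
        turn, reply = conversations[i], conversations[i + 1]
        if turn["from"] == "human" and reply["from"] == "gpt":
            records.append({
                "instruction": turn["value"],
                "input": "",
                "output": reply["value"],
            })
    return records

sample = [
    {"from": "human", "value": "안녕하세요"},
    {"from": "gpt", "value": "안녕하세요! 무엇을 도와드릴까요?"},
]
print(sharegpt_to_alpaca(sample))
```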
## License
Because the original data comes from OpenAI, the corresponding [terms of use](https://openai.com/policies/terms-of-use) apply.
Everything else is covered by the following license: Attribution 2.0 Korea (CC BY 2.0 KR)
## Author
https://github.com/melodysdreamj | [
-0.6343101263046265,
-0.6965616941452026,
0.2930690348148346,
0.41162070631980896,
-0.8794945478439331,
-0.12047184258699417,
0.10086467862129211,
-0.25476163625717163,
0.5561032891273499,
0.4904109835624695,
-0.532672643661499,
-0.8771573305130005,
-0.8345715999603271,
0.05742662400007248... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zetavg/zh-tw-wikipedia | zetavg | 2023-05-06T12:44:26Z | 80 | 14 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:zh",
"region:us"
] | 2023-05-06T12:44:26Z | 2023-05-06T10:35:05.000Z | 2023-05-06T10:35:05 | ---
dataset_info:
features:
- name: pageid
dtype: int64
- name: html
dtype: string
- name: markdown
dtype: string
- name: coordinate
struct:
- name: globe
dtype: string
- name: lat
dtype: float64
- name: lon
dtype: float64
- name: length
dtype: int64
- name: touched
dtype: string
- name: lastrevid
dtype: int64
- name: original_title
dtype: string
splits:
- name: train
num_bytes: 21930303312
num_examples: 2533212
download_size: 0
dataset_size: 21930303312
task_categories:
- text-generation
language:
- zh
pretty_name: 台灣正體中文維基百科 (zh-tw Wikipedia)
size_categories:
- 1M<n<10M
---
# Taiwan Traditional Chinese Wikipedia (zh-tw Wikipedia)
As of May 2023, the Taiwan Traditional Chinese text of 2,533,212 Chinese Wikipedia articles. Each article is one row, provided in both HTML and Markdown formats.
A nearly-complete collection of 2,533,212 Traditional Chinese (`zh-tw`) Wikipedia pages, gathered between May 1, 2023, and May 7, 2023. Includes both the original HTML format and an auto-converted Markdown version, which has been processed using [vinta/pangu.py](https://github.com/vinta/pangu.py).
Fetched between May 1 and May 7, 2023 from the Wikipedia [`action=query`](https://zh.wikipedia.org/w/api.php?action=help&modules=query) & [`prop=extracts`](https://zh.wikipedia.org/w/api.php?action=help&modules=query%2Bextracts) APIs; the content matches the Taiwan Traditional edition of the Wikipedia site, with no mixing of Traditional and Simplified characters.
For development usage, checkout [`zetavg/zh-tw-wikipedia-dev`](https://huggingface.co/datasets/zetavg/zh-tw-wikipedia-dev), which is a subset that contains only 1,000 randomly picked items.
## Data contents
* `pageid`: the Wikipedia page ID.
* `html`: the original HTML export of the page.
* `markdown`: the page converted to Markdown, with spaces inserted between full-width and half-width characters by [vinta/pangu.py](https://github.com/vinta/pangu.js).
* `coordinate`: the latitude/longitude of the page's subject, e.g. `{ "lat": 22.63333333, "lon": 120.26666667 }`. `null` if absent.
* `length`: the length of the page content.
* `touched`: the last revision time of the page.
* `lastrevid`: the revision ID of the latest revision.
* `original_title`: the original, unconverted Wikipedia page title, possibly in Simplified Chinese.
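A small sketch of consuming these fields on one record (the record below is a made-up illustration following the schema above, not actual dataset content):

```python
record = {
    "pageid": 123456,
    "markdown": "# 高雄市\n高雄市是中華民國的直轄市…",
    "coordinate": {"lat": 22.63333333, "lon": 120.26666667},
    "length": 42,
}

def has_coordinates(rec):
    """True when the page's subject carries a lat/lon pair."""
    coord = rec.get("coordinate")
    return coord is not None and "lat" in coord and "lon" in coord

# Keep only geolocated pages, e.g. for building a map index.
if has_coordinates(record):
    print(record["pageid"], record["coordinate"]["lat"], record["coordinate"]["lon"])
```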
## Known issues
* Any content that could not be extracted as *restricted-format HTML* is lost, e.g. all images, charts, tables, reference lists, and some code blocks.
* A very small number of overly long articles are not included, roughly: `四千`, `五千`, `六千`, `英雄傳說VI`, `軌跡系列角色列表`, `碧之軌跡角色列表`, `零之軌跡角色列表`.
* The page-title `title` field is missing (it could have been obtained via the API's `inprop=varianttitles`, but the field was overlooked when the fetching script was written). | [
-0.6892412900924683,
-0.5828278064727783,
0.25619179010391235,
0.21092046797275543,
-0.7677735686302185,
-0.45412102341651917,
-0.41915085911750793,
-0.44488903880119324,
0.6062335968017578,
0.2996802031993866,
-0.6616953015327454,
-0.6916513442993164,
-0.26466304063796997,
0.2185915857553... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BramVanroy/dolly-15k-dutch | BramVanroy | 2023-10-11T08:24:55Z | 80 | 1 | null | [
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:nl",
"license:cc-by-nc-sa-3.0",
"dolly",
"instruct",
"instruction",
"doi:10.57967/hf/0785",
"region:us"
] | 2023-10-11T08:24:55Z | 2023-06-19T06:51:47.000Z | 2023-06-19T06:51:47 | ---
license: cc-by-nc-sa-3.0
task_categories:
- question-answering
- text-generation
language:
- nl
tags:
- dolly
- instruct
- instruction
pretty_name: Dolly 15k Dutch
size_categories:
- 10K<n<100K
---
# Dataset Card for Dolly 15k Dutch
## Dataset Description
- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy
### Dataset Summary
This dataset contains 14,934 instructions, contexts and responses, in several natural language categories such as classification, closed QA, generation, etc. The English [original dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) was created by @databricks, who crowd-sourced the data creation via its employees. The current dataset is a translation of that dataset through ChatGPT (`gpt-3.5-turbo`).
☕ [**Want to help me out?**](https://www.buymeacoffee.com/bramvanroy) Translating the data with the OpenAI API, and prompt testing, cost me 💸$19.38💸. If you like this dataset, please consider [buying me a coffee](https://www.buymeacoffee.com/bramvanroy) to offset a portion of this cost, I appreciate it a lot! ☕
### Languages
- Dutch
## Dataset Structure
### Data Instances
```python
{
"id": 14963,
"instruction": "Wat zijn de duurste steden ter wereld?",
"context": "",
"response": "Dit is een uitgebreide lijst van de duurste steden: Singapore, Tel Aviv, New York, Hong Kong, Los Angeles, Zurich, Genève, San Francisco, Parijs en Sydney.",
"category": "brainstorming"
}
```
### Data Fields
- **id**: the ID of the item. The following 77 IDs are not included because they could not be translated (or were too long): `[1502, 1812, 1868, 4179, 4541, 6347, 8851, 9321, 10588, 10835, 11257, 12082, 12319, 12471, 12701, 12988, 13066, 13074, 13076, 13181, 13253, 13279, 13313, 13346, 13369, 13446, 13475, 13528, 13546, 13548, 13549, 13558, 13566, 13600, 13603, 13657, 13668, 13733, 13765, 13775, 13801, 13831, 13906, 13922, 13923, 13957, 13967, 13976, 14028, 14031, 14045, 14050, 14082, 14083, 14089, 14110, 14155, 14162, 14181, 14187, 14200, 14221, 14222, 14281, 14473, 14475, 14476, 14587, 14590, 14667, 14685, 14764, 14780, 14808, 14836, 14891, 14966]`
- **instruction**: the instruction (question)
- **context**: additional context that the AI can use to answer the question
- **response**: the AI's expected response
- **category**: the category of this type of question (see [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k#annotator-guidelines) for more info)
## Dataset Creation
Both the translations and the topics were produced with OpenAI's API for `gpt-3.5-turbo`, using `max_tokens=1024, temperature=0` as parameters.
The prompt template to translate the input is (where `src_lang` was English and `tgt_lang` Dutch):
```python
CONVERSATION_TRANSLATION_PROMPT = """You are asked to translate a task's instruction, optional context to the task, and the response to the task, from {src_lang} to {tgt_lang}.
Here are the requirements that you should adhere to:
1. maintain the format: the task consists of a task instruction (marked `instruction: `), optional context to the task (marked `context: `) and response for the task marked with `response: `;
2. do not translate the identifiers `instruction: `, `context: `, and `response: ` but instead copy them to your output;
3. make sure that text is fluent to read and does not contain grammatical errors. Use standard {tgt_lang} without regional bias;
4. translate the instruction and context text using informal, but standard, language;
5. make sure to avoid biases (such as gender bias, grammatical bias, social bias);
6. if the instruction is to correct grammar mistakes or spelling mistakes then you have to generate a similar mistake in the context in {tgt_lang}, and then also generate a corrected output version in the output in {tgt_lang};
7. if the instruction is to translate text from one language to another, then you do not translate the text that needs to be translated in the instruction or the context, nor the translation in the response (just copy them as-is);
8. do not translate code fragments but copy them to your output. If there are English examples, variable names or definitions in code fragments, keep them in English.
Now translate the following task with the requirements set out above. Do not provide an explanation and do not add anything else.\n\n"""
```
The system message was:
```
You are a helpful assistant that translates English to Dutch according to the requirements that are given to you.
```
Note that 77 items (0.5%) were not successfully translated. This can either mean that the prompt was too long for the given limit (`max_tokens=1024`) or that the generated translation could not be parsed into `instruction`, `context` and `response` fields. The missing IDs are `[1502, 1812, 1868, 4179, 4541, 6347, 8851, 9321, 10588, 10835, 11257, 12082, 12319, 12471, 12701, 12988, 13066, 13074, 13076, 13181, 13253, 13279, 13313, 13346, 13369, 13446, 13475, 13528, 13546, 13548, 13549, 13558, 13566, 13600, 13603, 13657, 13668, 13733, 13765, 13775, 13801, 13831, 13906, 13922, 13923, 13957, 13967, 13976, 14028, 14031, 14045, 14050, 14082, 14083, 14089, 14110, 14155, 14162, 14181, 14187, 14200, 14221, 14222, 14281, 14473, 14475, 14476, 14587, 14590, 14667, 14685, 14764, 14780, 14808, 14836, 14891, 14966]`.
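Since the prompt requires the identifiers `instruction: `, `context: `, and `response: ` to be copied verbatim into the translation, a completion can be split back into the three fields by matching those markers. A minimal sketch of such a parser (a hypothetical helper, not the exact code used to build this dataset) could look like:

```python
import re

def parse_translation(output: str):
    """Split a model completion into instruction/context/response parts.

    Relies on the prompt requirement that the identifiers
    'instruction: ', 'context: ' and 'response: ' are copied verbatim.
    Returns None when the completion cannot be parsed (as happened
    for the 77 missing IDs above).
    """
    pattern = re.compile(
        r"instruction:\s*(?P<instruction>.*?)\s*"
        r"(?:context:\s*(?P<context>.*?)\s*)?"   # context is optional
        r"response:\s*(?P<response>.*)",
        re.DOTALL,
    )
    match = pattern.fullmatch(output.strip())
    if match is None:
        return None
    return {
        "instruction": match.group("instruction"),
        "context": match.group("context") or "",
        "response": match.group("response"),
    }
```

A completion that lacks the markers (or whose prompt was truncated by `max_tokens`) simply yields `None` and can be dropped, which matches how the missing items are described.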
### Source Data
#### Initial Data Collection and Normalization
Initial data collection by [databricks](https://huggingface.co/datasets/databricks/databricks-dolly-15k). See their repository for more information about this dataset.
## Considerations for Using the Data
Note that the translations in this new dataset have not been verified by humans! Use at your own risk, both in terms of quality and biases.
### Discussion of Biases
As with any machine-generated texts, users should be aware of potential biases that are included in this dataset. Although the prompt specifically includes `make sure to avoid biases (such as gender bias, grammatical bias, social bias)`, the impact of such a command is of course not known. It is likely that biases remain in the dataset, so use with caution.
### Other Known Limitations
The translation quality has not been verified. Use at your own risk!
### Licensing Information
This repository follows the original databricks license, which is CC BY-SA 3.0, but see below for a specific restriction.
This text was generated (either in part or in full) with GPT-3 (`gpt-3.5-turbo`), OpenAI’s large-scale language-generation model. Upon generating draft language, the author reviewed, edited, and revised the language to their own liking and takes ultimate responsibility for the content of this publication.
If you use this dataset, you must also follow the [Sharing](https://openai.com/policies/sharing-publication-policy) and [Usage](https://openai.com/policies/usage-policies) policies.
As clearly stated in their [Terms of Use](https://openai.com/policies/terms-of-use), specifically 2c.iii, "[you may not] use output from the Services to develop models that compete with OpenAI". That means that you cannot use this dataset to build models that are intended to commercially compete with OpenAI. [As far as I am aware](https://law.stackexchange.com/questions/93308/licensing-material-generated-with-chatgpt), that is a specific restriction that should serve as an addendum to the current license.
### Citation Information
If you use this data set, please cite:
Vanroy, B. (2023). Dolly 15k Dutch [Data set]. Hugging Face. https://doi.org/10.57967/hf/0785
```bibtex
@misc {https://doi.org/10.57967/hf/0785,
author = {Vanroy, Bram },
title = { {D}olly 15k {D}utch },
year = 2023,
url = { https://huggingface.co/datasets/BramVanroy/dolly-15k-dutch },
doi = { 10.57967/hf/0785 },
publisher = { Hugging Face }
}
```
### Contributions
Thanks to [databricks](https://huggingface.co/datasets/databricks/databricks-dolly-15k) for the initial, high-quality dataset. | [
-0.3178199827671051,
-0.8314254283905029,
-0.043185990303754807,
0.5728631615638733,
-0.18720901012420654,
-0.46189084649086,
-0.3273102641105652,
-0.4060041904449463,
0.28821712732315063,
0.4723952114582062,
-0.6232887506484985,
-0.6419499516487122,
-0.5972840785980225,
0.3907521069049835... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jamescalam/langchain-docs-23-06-27 | jamescalam | 2023-06-27T15:51:24Z | 80 | 5 | null | [
"region:us"
] | 2023-06-27T15:51:24Z | 2023-06-27T14:08:06.000Z | 2023-06-27T14:08:06 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
flozi00/conversations | flozi00 | 2023-11-22T10:06:34Z | 80 | 10 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"language:de",
"region:us"
] | 2023-11-22T10:06:34Z | 2023-07-06T13:24:36.000Z | 2023-07-06T13:24:36 | ---
language:
- de
task_categories:
- conversational
- text-generation
dataset_info:
features:
- name: conversations
dtype: string
- name: from
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 256990391.7622295
num_examples: 134598
download_size: 133459205
dataset_size: 256990391.7622295
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This dataset is an uncensored and massively cleaned, double-checked merge of several German datasets / subsets.
The mission of this work is to build a high-quality dataset for the German LLM community.
This repo is continuously updated, with old parts being replaced by newer versions.
Quality over Quantity
https://github.com/flozi00/chat-data-experiments/blob/main/chat_combiner.py | [
-0.3246893882751465,
-0.45641690492630005,
-0.04537332057952881,
0.15852054953575134,
-0.5028384923934937,
0.15803305804729462,
-0.2221217304468155,
-0.3751583695411682,
0.22389841079711914,
1.0072574615478516,
-0.6576440930366516,
-0.7238858342170715,
-0.5852991342544556,
-0.1573404669761... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DynamicSuperb/SpeechTextMatching_LJSpeech | DynamicSuperb | 2023-07-12T11:11:34Z | 80 | 0 | null | [
"region:us"
] | 2023-07-12T11:11:34Z | 2023-07-09T13:17:45.000Z | 2023-07-09T13:17:45 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: instruction
dtype: string
- name: label
dtype: string
- name: transcription
dtype: string
splits:
- name: test
num_bytes: 3802579053.0
num_examples: 13100
download_size: 3786166384
dataset_size: 3802579053.0
---
# Dataset Card for "speechTextMatching_LJSpeech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.33410319685935974,
-0.35783651471138,
0.1042252779006958,
0.2638584077358246,
-0.01642085239291191,
0.1929783672094345,
-0.16765351593494415,
-0.18618512153625488,
0.942814826965332,
0.44495946168899536,
-0.8118863701820374,
-0.7676716446876526,
-0.5648074746131897,
-0.3778022527694702,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
phi0108/demo-structure-en | phi0108 | 2023-07-28T13:51:08Z | 80 | 0 | null | [
"region:us"
] | 2023-07-28T13:51:08Z | 2023-07-28T13:50:42.000Z | 2023-07-28T13:50:42 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jxie/lipop | jxie | 2023-08-04T22:25:41Z | 80 | 0 | null | [
"region:us"
] | 2023-08-04T22:25:41Z | 2023-08-04T22:25:28.000Z | 2023-08-04T22:25:28 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: float64
splits:
- name: train_0
num_bytes: 200193
num_examples: 3360
- name: val_0
num_bytes: 24928
num_examples: 420
- name: test_0
num_bytes: 24770
num_examples: 420
- name: train_1
num_bytes: 199909
num_examples: 3360
- name: val_1
num_bytes: 25212
num_examples: 420
- name: test_1
num_bytes: 24770
num_examples: 420
- name: train_2
num_bytes: 200080
num_examples: 3360
- name: val_2
num_bytes: 24726
num_examples: 420
- name: test_2
num_bytes: 25085
num_examples: 420
download_size: 387383
dataset_size: 749673
---
# Dataset Card for "lipop"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7217918634414673,
-0.0658733993768692,
0.030381862074136734,
0.014054229483008385,
-0.1853993535041809,
-0.0911455750465393,
0.29242777824401855,
-0.40812569856643677,
0.709317147731781,
0.774911642074585,
-0.9051501154899597,
-0.8466838002204895,
-0.5945296287536621,
-0.341181486845016... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
google/trueteacher | google | 2023-09-06T19:39:38Z | 80 | 9 | null | [
"language:en",
"license:cc-by-nc-4.0",
"natural-language-inference",
"news-articles-summarization",
"arxiv:2305.11171",
"arxiv:2210.11416",
"region:us"
] | 2023-09-06T19:39:38Z | 2023-08-15T11:56:54.000Z | 2023-08-15T11:56:54 | ---
license: cc-by-nc-4.0
language:
- en
tags:
- natural-language-inference
- news-articles-summarization
---
# **TrueTeacher**
## Dataset Summary
This is a large-scale synthetic dataset for training **Factual Consistency Evaluation** models, introduced in the [TrueTeacher paper (Gekhman et al., 2023)](https://arxiv.org/pdf/2305.11171.pdf).
## Dataset Details
The dataset contains model-generated summaries of articles from the train split of the **CNN/DailyMail** dataset [(Hermann et al., 2015)](https://proceedings.neurips.cc/paper_files/paper/2015/file/afdec7005cc9f14302cd0474fd0f3c96-Paper.pdf)
which are annotated for factual consistency using **FLAN-PaLM 540B** [(Chung et al.,2022)](https://arxiv.org/pdf/2210.11416.pdf).
Summaries were generated using summarization models with different capacities, which were created by fine-tuning **T5** [(Raffel et al., 2020)](https://jmlr.org/papers/volume21/20-074/20-074.pdf) on the **XSum** dataset [(Narayan et al., 2018)](https://aclanthology.org/D18-1206.pdf).
We used the following 5 capacities: T5-11B, T5-3B, T5-large, T5-base and T5-small.
## Data format
The data contains json lines with the following keys:
- `"summarization_model"` - The summarization model used to generate the summary.
- `"cnndm_id"` - The original id from the CNN/DailyMail dataset, this need to be used in order to retrieve the corresponding article from CNN/DailyMail (which was used as the grounding document).
- `"summary"` - The model-generated summary.
- `"label"` - A binary label ('1' - Factualy Consistent, '0' - Factualy Inconsistent).
Here is an example of a single data item:
```json
{
"summarization_model": "T5-11B",
"cnndm_id": "f72048a23154de8699c307e2f41157abbfcae261",
"summary": "Children's brains are being damaged by prolonged internet access, a former children's television presenter has warned."
"label": "1",
}
```
## Loading the dataset
To use the dataset, you need to fetch the relevant documents from the CNN/DailyMail dataset. The following code can be used for that purpose:
```python
from datasets import load_dataset
from tqdm import tqdm
trueteacher_data = load_dataset("google/trueteacher", split='train')
cnn_dailymail_data = load_dataset("cnn_dailymail", "3.0.0", split='train')
cnn_dailymail_articles_by_id = {example['id']: example['article'] for example in cnn_dailymail_data}
trueteacher_data_with_documents = []
for example in tqdm(trueteacher_data):
example['document'] = cnn_dailymail_articles_by_id[example['cnndm_id']]
trueteacher_data_with_documents.append(example)
```
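Once the grounding documents are attached, the items can be sliced by the fields described in the data format above. As a small illustration (a hypothetical helper, shown on toy items shaped like the example record, not part of the released code):

```python
from collections import Counter

def label_balance(examples):
    """Count factually consistent ('1') vs. inconsistent ('0') labels
    per summarization model, for items shaped like the example record."""
    counts = {}
    for ex in examples:
        model_counts = counts.setdefault(ex["summarization_model"], Counter())
        model_counts[ex["label"]] += 1
    return counts

# Toy items mimicking the record structure shown above.
toy = [
    {"summarization_model": "T5-11B", "label": "1"},
    {"summarization_model": "T5-11B", "label": "0"},
    {"summarization_model": "T5-small", "label": "0"},
]
```

The same function can be applied to `trueteacher_data_with_documents` to inspect how label balance varies across the five summarizer capacities.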
## Intended Use
This dataset is intended for research use (**non-commercial**) in English.
The recommended use case is training factual consistency evaluation models for summarization.
## Out-of-scope use
Any use cases which violate the **cc-by-nc-4.0** license.
Usage in languages other than English.
## Citation
If you use this dataset for a research publication, please cite the TrueTeacher paper (using the bibtex entry below), as well as the CNN/DailyMail, XSum, T5 and FLAN papers mentioned above.
```
@misc{gekhman2023trueteacher,
title={TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models},
author={Zorik Gekhman and Jonathan Herzig and Roee Aharoni and Chen Elkind and Idan Szpektor},
year={2023},
eprint={2305.11171},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.15864227712154388,
-0.6437073945999146,
0.21302570402622223,
0.06772726774215698,
-0.2371291220188141,
-0.14529640972614288,
-0.38206151127815247,
-0.1385856568813324,
-0.08044799417257309,
0.3285931646823883,
-0.4869077503681183,
-0.6628203988075256,
-0.7245525121688843,
0.234835758805... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BEE-spoke-data/SYSK-Transcripts | BEE-spoke-data | 2023-11-08T01:40:22Z | 80 | 0 | null | [
"size_categories:1K<n<10K",
"license:cc-by-4.0",
"region:us"
] | 2023-11-08T01:40:22Z | 2023-09-17T21:38:31.000Z | 2023-09-17T21:38:31 | ---
license: cc-by-4.0
size_categories:
- 1K<n<10K
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: title
dtype: string
- name: link
dtype: string
- name: desc
dtype: string
- name: summary
dtype: string
- name: pubDate
dtype: string
- name: pubFormatted
dtype: string
- name: enc_len
dtype: string
- name: enc_type
dtype: string
- name: audio_url
dtype: string
- name: transcript
dtype: string
splits:
- name: train
num_bytes: 72117648
num_examples: 1965
download_size: 40830798
dataset_size: 72117648
- config_name: seg-no-ads
features:
- name: id
dtype: string
- name: title
dtype: string
- name: link
dtype: string
- name: desc
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 69771386
num_examples: 1911
download_size: 40026675
dataset_size: 69771386
- config_name: segmented
features:
- name: id
dtype: string
- name: title
dtype: string
- name: link
dtype: string
- name: desc
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 69521425.32519084
num_examples: 1911
download_size: 40832338
dataset_size: 69521425.32519084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: seg-no-ads
data_files:
- split: train
path: seg-no-ads/train-*
- config_name: segmented
data_files:
- split: train
path: segmented/train-*
---
# Dataset Card for "SYSK_Transcripts"
Transcriptions + Summaries of _Stuff You Should Know_. DOI 10.17605/OSF.IO/VM9NT
```python
Dataset({
features: ['id', 'title', 'link', 'desc', 'summary', 'pubDate', 'pubFormatted', 'enc_len', 'enc_type', 'audio_url', 'transcript'],
num_rows: 1965
})
```
## citation
```
@article{https://doi.org/10.17605/osf.io/vm9nt,
doi = {10.17605/OSF.IO/VM9NT},
url = {https://osf.io/vm9nt/},
author = {Pierson, Britt},
keywords = {audio processing, audio timestamp, corpus, dataset, natural language processing, nlp, podcast, podcasts trancripts, podcasts transcript, podcast transcript, podcast transcripts, sentiment analysis, transcript, transcripts, transcripts of podcast, transcripts of podcasts},
title = {"Stuff You Should Know" Podcast Transcripts - Full Dataset with Transcript of All Episodes (SYSK_Transcripts)},
publisher = {Open Science Framework},
year = {2022},
copyright = {CC-By Attribution 4.0 International}
}
``` | [
-0.19299018383026123,
-0.3621409833431244,
0.1796882301568985,
0.356415718793869,
-0.38723528385162354,
-0.11256088316440582,
-0.3568163216114044,
0.024361319839954376,
0.5722482800483704,
0.2719126343727112,
-0.6761715412139893,
-1.0336177349090576,
-0.44154271483421326,
0.359803408384323... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kaist-ai/Feedback-Collection | kaist-ai | 2023-10-14T14:53:22Z | 80 | 41 | null | [
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"arxiv:2310.08491",
"region:us"
] | 2023-10-14T14:53:22Z | 2023-10-13T01:17:17.000Z | 2023-10-13T01:17:17 | ---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: "new_feedback_collection.json"
---
## Dataset Description
- **Homepage:** https://github.com/kaistAI/Prometheus
- **Repository:** https://github.com/kaistAI/Prometheus
- **Paper:** https://arxiv.org/abs/2310.08491
- **Point of Contact:** seungone@kaist.ac.kr
# Dataset Card
### Dataset Summary
The Feedback Collection is a dataset designed to induce fine-grained evaluation capabilities into language models.

Recently, proprietary LLMs (e.g., GPT-4) have been used to evaluate long-form responses. In our experiments, we found that open-source LMs are not capable of evaluating long-form responses, showing low correlation with both human evaluators and GPT-4.
In our paper, we found that by (1) fine-tuning on feedback generated by GPT-4 and (2) including the appropriate reference materials (reference answers & score rubrics), we can effectively induce fine-grained evaluation into open-source LMs.
The Feedback Collection provides 1K score rubrics, 20K instructions & reference answers, and 100K responses & feedback (20K for each score in the range 1-5).
Experimental results show that Prometheus (an LM obtained by fine-tuning Llama-2-Chat on the Feedback Collection) can function as an evaluator in both an absolute scoring setting and a ranking scoring setting.
### Languages
English
## Dataset Structure
* instruction: The input that is given to the evaluator LM. It includes the instruction & response to evaluate, the reference answer, the score rubric.
* output: The output that the evaluator LM should generate. It includes the feedback and score decision divided by a phrase ```[RESULT]```.
* `orig_instruction`: The instruction to be evaluated. Note that this differs from the `instruction` field above, which includes all the components.
* `orig_response`: The response to be evaluated.
* `orig_reference_answer`: A reference answer to the `orig_instruction`.
* `orig_criteria`: The score criteria used to evaluate the `orig_response`.
* `orig_score1_description`: A description of when to give a score of 1 to the `orig_response`.
* `orig_score2_description`: A description of when to give a score of 2 to the `orig_response`.
* `orig_score3_description`: A description of when to give a score of 3 to the `orig_response`.
* `orig_score4_description`: A description of when to give a score of 4 to the `orig_response`.
* `orig_score5_description`: A description of when to give a score of 5 to the `orig_response`.
* `orig_feedback`: Feedback that critiques the `orig_response`.
* `orig_score`: An integer between 1 and 5 given to the `orig_response`.
In our paper, the evaluator LM was trained on inputs in the following prompt format (already assembled into the 'instruction' field):
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
The following prompt format (already processed in the 'output') was used to train the evaluator LM:
```
{orig_feedback}
[RESULT] {orig_score}
```
Then during evaluation, we parsed the prediction after the phrase ```[RESULT]```.
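A minimal sketch of such a parser (a hypothetical helper, not the paper's actual evaluation code) simply splits the completion on the `[RESULT]` phrase and validates the score:

```python
def parse_output(completion: str):
    """Split an evaluator completion of the form
    '{feedback} [RESULT] {score}' into its two parts.

    Returns (feedback, score) with score as an int in 1..5,
    or (completion, None) when no valid score is found.
    """
    if "[RESULT]" not in completion:
        return completion.strip(), None
    feedback, _, score_part = completion.rpartition("[RESULT]")
    score_str = score_part.strip()
    if score_str in {"1", "2", "3", "4", "5"}:
        return feedback.strip(), int(score_str)
    return completion.strip(), None
```

Completions without a parseable integer score can then be retried or discarded during evaluation.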
### Data Splits
| name | train |
|-------------------|------:|
|Feedback-Collection|99,952|
### Citation Information
If you find the following model helpful, please consider citing our paper!
```bibtex
@misc{kim2023prometheus,
title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
year={2023},
eprint={2310.08491},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
-0.4175482988357544,
-0.8648768663406372,
0.4940171539783478,
0.045103639364242554,
-0.10844704508781433,
-0.39793649315834045,
-0.5149983167648315,
-0.490091472864151,
0.08996251970529556,
0.41367602348327637,
-0.4247308671474457,
-0.642475962638855,
-0.44475045800209045,
0.15934655070304... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
DataProvenanceInitiative/dialog_submix_original | DataProvenanceInitiative | 2023-10-16T17:38:01Z | 80 | 0 | null | [
"region:us"
] | 2023-10-16T17:38:01Z | 2023-10-16T17:37:44.000Z | 2023-10-16T17:37:44 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task_source
dtype: string
- name: task_name
dtype: string
- name: template_type
dtype: string
splits:
- name: train
num_bytes: 1024507265
num_examples: 553869
download_size: 583008075
dataset_size: 1024507265
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dialog_submix_original"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5436298251152039,
-0.49761998653411865,
0.039583876729011536,
-0.019958773627877235,
-0.3116263151168823,
-0.01574881002306938,
0.34842944145202637,
0.20701299607753754,
0.879675030708313,
0.7819443345069885,
-1.1887719631195068,
-0.4969188868999481,
-0.3694137930870056,
-0.174602448940... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ckandemir/bitcoin_tweets_sentiment_kaggle | ckandemir | 2023-11-06T07:25:54Z | 80 | 0 | null | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:kaggle",
"datasets",
"bitcoin",
"text-classification",
"sentiment-analysis",
"region:us"
] | 2023-11-06T07:25:54Z | 2023-10-27T08:01:02.000Z | 2023-10-27T08:01:02 | ---
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- kaggle
task_categories:
- text-classification
task_ids:
- sentiment-classification
tags:
- datasets
- bitcoin
- text-classification
- sentiment-analysis
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: eval
path: data/eval-*
dataset_info:
features:
- name: Date
dtype: string
- name: text
dtype: string
- name: Sentiment
dtype: string
splits:
- name: train
num_bytes: 12842246
num_examples: 77791
- name: test
num_bytes: 1609120
num_examples: 9724
- name: eval
num_bytes: 1598297
num_examples: 9724
download_size: 9868625
dataset_size: 16049663
---
# Dataset Card for "Bitcoin Tweets ("
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Dataset Distribution](#dataset-distribution)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
### Dataset Summary
This dataset contains a collection of 16 million tweets related to Bitcoin, collected from Twitter. Each tweet is tagged with sentiment (positive, negative, neutral). The dataset was originally created and uploaded to Kaggle by user gauravduttakiit. It is a valuable resource for training and evaluating models for sentiment analysis within the context of cryptocurrency discussions.
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used to train a model for sentiment analysis. The performance of the model can be evaluated using standard metrics like accuracy, F1 score, precision, and recall.
### Languages
The text data is primarily in English.
## Dataset Structure
### Data Instances
Each instance in the dataset contains the following fields:
- `Date`: the date of the tweet.
- `text`: the text of the tweet.
- `Sentiment`: the sentiment of the tweet, labeled as either "positive", "negative", or "neutral".
### Data Fields
- `Date`: a string containing the date of the tweet.
- `text`: a string containing the text of the tweet.
- `Sentiment`: a string indicating the sentiment of the tweet.
### Data Splits
This hosted version provides `train`, `test`, and `eval` splits (roughly 80/10/10; see the configuration above for exact sizes).
## Dataset Creation
### Curation Rationale
The dataset was curated to analyze the sentiment within the cryptocurrency community, specifically focusing on Bitcoin.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from Twitter using specific keywords related to Bitcoin. For more details regarding data collection, one can refer to the [original Kaggle dataset](https://www.kaggle.com/datasets/gauravduttakiit/bitcoin-tweets-16m-tweets-with-sentiment-tagged).
#### Who are the source data providers?
The data was provided by Kaggle user gauravduttakiit.
### Annotations
The sentiment labels were generated using automated sentiment analysis tools. For more details, refer to the [original Kaggle dataset](https://www.kaggle.com/datasets/gauravduttakiit/bitcoin-tweets-16m-tweets-with-sentiment-tagged).
## Dataset Distribution
### Dataset Curators
The dataset was curated by gauravduttakiit and uploaded to Kaggle.
### Licensing Information
Refer to the [original Kaggle dataset](https://www.kaggle.com/datasets/gauravduttakiit/bitcoin-tweets-16m-tweets-with-sentiment-tagged) for licensing information. | [
-0.33413785696029663,
-0.6314150094985962,
-0.02537287212908268,
0.15908876061439514,
-0.7116889953613281,
0.4224133789539337,
-0.3159504234790802,
-0.4027751684188843,
0.6065671443939209,
0.3075004518032074,
-0.6112971305847168,
-1.0651953220367432,
-0.6181238889694214,
-0.120881408452987... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ernlavr/IDMGSP-danish | ernlavr | 2023-11-18T13:21:17Z | 80 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-18T13:21:17Z | 2023-11-12T18:52:42.000Z | 2023-11-12T18:52:42 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype: int64
- name: abstract
dtype: string
- name: translation
dtype: string
splits:
- name: train
num_bytes: 24634068
num_examples: 16000
- name: test
num_bytes: 12269388
num_examples: 8000
download_size: 19382041
dataset_size: 36903456
---
## IDMGSP Danish Translated
A Danish-translated version of the original [IDMGSP](https://huggingface.co/datasets/tum-nlp/IDMGSP) dataset. This Danish version contains all `abstracts` from the `classifier_input` subset, translated using [`NLLB-200-Distilled-600M`](https://huggingface.co/facebook/nllb-200-distilled-600M). Created as part of the [DM-AI 2023](https://dmiai.dk/) competition.
-0.6569007039070129,
-0.42960262298583984,
0.3670230209827423,
0.45863595604896545,
-0.43528372049331665,
0.2343692034482956,
0.06410522758960724,
-0.4421553909778595,
0.23009417951107025,
0.43205317854881287,
-0.34552547335624695,
-0.6251652240753174,
-0.8287511467933655,
0.57228404283523... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
deokhk/en_wiki_sentences_1000 | deokhk | 2023-11-21T10:48:21Z | 80 | 0 | null | [
"region:us"
] | 2023-11-21T10:48:21Z | 2023-11-21T10:48:16.000Z | 2023-11-21T10:48:16 | ---
dataset_info:
features:
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 128906
num_examples: 1000
- name: dev
num_bytes: 124329
num_examples: 1000
download_size: 157596
dataset_size: 253235
---
# Dataset Card for "en_wiki_sentences_1000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6778450608253479,
-0.4465470016002655,
0.23677383363246918,
0.45549795031547546,
-0.08482680469751358,
-0.339771568775177,
-0.055307742208242416,
-0.10383172333240509,
0.8072403073310852,
0.6039378046989441,
-0.8617110252380371,
-0.6113024950027466,
-0.4907331168651581,
0.28971517086029... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yxchar/chemprot-tlm | yxchar | 2021-11-04T22:59:08Z | 79 | 0 | null | [
"region:us"
] | 2021-11-04T22:59:08Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ashraq/financial-news-articles | ashraq | 2022-10-25T18:01:06Z | 79 | 5 | null | [
"region:us"
] | 2022-10-25T18:01:06Z | 2022-10-25T17:59:05.000Z | 2022-10-25T17:59:05 | ---
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 848347009
num_examples: 306242
download_size: 492243206
dataset_size: 848347009
---
# Dataset Card for "financial-news-articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The data was obtained from [here](https://www.kaggle.com/datasets/jeet2016/us-financial-news-articles) | [
-0.38404861092567444,
-0.6943566799163818,
0.3994850516319275,
0.12458028644323349,
-0.30671459436416626,
0.10358040034770966,
0.08534139394760132,
-0.3858218789100647,
0.8630300164222717,
0.5220274329185486,
-0.7099872827529907,
-0.7798811793327332,
-0.4574474096298218,
-0.289763092994689... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
orai-nlp/basqueGLUE | orai-nlp | 2022-12-21T09:54:32Z | 79 | 1 | null | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:intent-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"task_ids:coreference-resolution",
"annot... | 2022-12-21T09:54:32Z | 2022-12-20T14:28:19.000Z | 2022-12-20T14:28:19 | ---
annotations_creators:
- expert-generated
language:
- eu
language_creators:
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: BasqueGLUE
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-classification
- token-classification
task_ids:
- intent-classification
- natural-language-inference
- sentiment-classification
- topic-classification
- named-entity-recognition
- coreference-resolution
configs:
- bec
- bhtc
- coref
- intent
- nerc_id
- nerc_od
- qnli
- slot
- vaxx
- wic
dataset_info:
- config_name: bec
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': N
'1': NEU
'2': P
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 693284
num_examples: 6078
- name: test
num_bytes: 148510
num_examples: 1302
- name: validation
num_bytes: 148377
num_examples: 1302
download_size: 1217803
dataset_size: 990171
- config_name: bhtc
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Ekonomia
'1': Euskal Herria
'2': Euskara
'3': Gizartea
'4': Historia
'5': Ingurumena
'6': Iritzia
'7': Komunikazioa
'8': Kultura
'9': Nazioartea
'10': Politika
'11': Zientzia
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 2431494
num_examples: 8585
- name: test
num_bytes: 523066
num_examples: 1854
- name: validation
num_bytes: 519555
num_examples: 1857
download_size: 3896312
dataset_size: 3474115
- config_name: coref
features:
- name: text
dtype: string
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 365830
num_examples: 986
- name: test
num_bytes: 201378
num_examples: 587
- name: validation
num_bytes: 108632
num_examples: 320
download_size: 855074
dataset_size: 675840
- config_name: intent
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': alarm/cancel_alarm
'1': alarm/modify_alarm
'2': alarm/set_alarm
'3': alarm/show_alarms
'4': alarm/snooze_alarm
'5': alarm/time_left_on_alarm
'6': reminder/cancel_reminder
'7': reminder/set_reminder
'8': reminder/show_reminders
'9': weather/checkSunrise
'10': weather/checkSunset
'11': weather/find
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 182856
num_examples: 3418
- name: test
num_bytes: 56118
num_examples: 1087
- name: validation
num_bytes: 101644
num_examples: 1904
download_size: 595375
dataset_size: 340618
- config_name: nerc_id
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-ORG
'6': I-ORG
'7': B-MISC
'8': I-MISC
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 946007
num_examples: 2842
- name: test
num_bytes: 653960
num_examples: 1846
- name: validation
num_bytes: 237464
num_examples: 711
download_size: 1723325
dataset_size: 1837431
- config_name: nerc_od
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-LOC
'4': I-LOC
'5': B-ORG
'6': I-ORG
'7': B-MISC
'8': I-MISC
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 1183471
num_examples: 3553
- name: test
num_bytes: 262853
num_examples: 598
- name: validation
num_bytes: 270028
num_examples: 601
download_size: 1613369
dataset_size: 1716352
- config_name: qnli
features:
- name: question
dtype: string
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': not_entailment
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 327189
num_examples: 1764
- name: test
num_bytes: 42569
num_examples: 238
- name: validation
num_bytes: 46359
num_examples: 230
download_size: 532399
dataset_size: 416117
- config_name: slot
features:
- name: tokens
sequence: string
- name: tags
sequence:
class_label:
names:
'0': O
'1': B-datetime
'2': B-location
'3': B-negation
'4': B-alarm/alarm_modifier
'5': B-alarm/recurring_period
'6': B-reminder/noun
'7': B-reminder/todo
'8': B-reminder/reference
'9': B-reminder/recurring_period
'10': B-weather/attribute
'11': B-weather/noun
'12': I-datetime
'13': I-location
'14': I-negation
'15': I-alarm/alarm_modifier
'16': I-alarm/recurring_period
'17': I-reminder/noun
'18': I-reminder/todo
'19': I-reminder/reference
'20': I-reminder/recurring_period
'21': I-weather/attribute
'22': I-weather/noun
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 388774
num_examples: 3418
- name: test
num_bytes: 114876
num_examples: 1088
- name: validation
num_bytes: 214053
num_examples: 1900
download_size: 962250
dataset_size: 717703
- config_name: vaxx
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': AGAINST
'1': NONE
'2': FAVOR
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 176436
num_examples: 864
- name: test
num_bytes: 70947
num_examples: 312
- name: validation
num_bytes: 42795
num_examples: 206
download_size: 333997
dataset_size: 290178
- config_name: wic
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: word
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: start1
dtype: int32
- name: start2
dtype: int32
- name: end1
dtype: int32
- name: end2
dtype: int32
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 172847108
num_examples: 408559
- name: test
num_bytes: 589578
num_examples: 1400
- name: validation
num_bytes: 251549
num_examples: 600
download_size: 22938354
dataset_size: 173688235
---
# Dataset Card for BasqueGLUE
## Table of Contents
* [Table of Contents](#table-of-contents)
* [Dataset Description](#dataset-description)
* [Dataset Summary](#dataset-summary)
* [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
* [Languages](#languages)
* [Dataset Structure](#dataset-structure)
* [Data Instances](#data-instances)
* [Data Fields](#data-fields)
* [Data Splits](#data-splits)
* [Dataset Creation](#dataset-creation)
* [Curation Rationale](#curation-rationale)
* [Source Data](#source-data)
* [Annotations](#annotations)
* [Personal and Sensitive Information](#personal-and-sensitive-information)
* [Considerations for Using the Data](#considerations-for-using-the-data)
* [Social Impact of Dataset](#social-impact-of-dataset)
* [Discussion of Biases](#discussion-of-biases)
* [Other Known Limitations](#other-known-limitations)
* [Additional Information](#additional-information)
* [Dataset Curators](#dataset-curators)
* [Licensing Information](#licensing-information)
* [Citation Information](#citation-information)
* [Contributions](#contributions)
## Dataset Description
* **Repository:** <https://github.com/orai-nlp/BasqueGLUE>
* **Paper:** [BasqueGLUE: A Natural Language Understanding Benchmark for Basque](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.172.pdf)
* **Point of Contact:** [Contact Information](https://github.com/orai-nlp/BasqueGLUE#contact-information)
### Dataset Summary
Natural Language Understanding (NLU) technology has improved significantly over the last few years, and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages.
We present BasqueGLUE, the first NLU benchmark for Basque, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. BasqueGLUE is freely available under an open license.
| Dataset | \|Train\| | \|Val\| | \|Test\| | Task | Metric | Domain |
|----------------|----------:|--------:|---------:|------------------------|:------:|-----------------|
| NERCid | 51,539 | 12,936 | 35,855 | NERC | F1 | News |
| NERCood | 64,475 | 14,945 | 14,462 | NERC | F1 | News, Wikipedia |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 | Intent classification | F1 | Dialog system |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 | Slot filling | F1 | Dialog system |
| BHTCv2 | 8,585 | 1,857 | 1,854 | Topic classification | F1 | News |
| BEC2016eu | 6,078 | 1,302 | 1,302 | Sentiment analysis | F1 | Twitter |
| VaxxStance | 864 | 206 | 312 | Stance detection | MF1* | Twitter |
| QNLIeu | 1,764 | 230 | 238 | QA/NLI | Acc | Wikipedia |
| WiCeu | 408,559 | 600 | 1,400 | WSD | Acc | Wordnet |
| EpecKorrefBin | 986 | 320 | 587 | Coreference resolution | Acc | News |
### Supported Tasks and Leaderboards
This benchmark comprises the following tasks:
#### NERCid
This dataset contains sentences from the news domain with manually annotated named entities. The data is a merge of EIEC (a collection of news wire articles from the Euskaldunon Egunkaria newspaper; Alegria et al. 2004) and newly annotated data from naiz.eus. The data is annotated following the BIO annotation scheme over four categories: person, organization, location, and miscellaneous.
#### NERCood
This dataset contains sentences with manually annotated named entities. The training data is a merge of EIEC (a collection of news wire articles from the Euskaldunon Egunkaria newspaper; Alegria et al. 2004) and newly annotated data from naiz.eus. The data is annotated following the BIO annotation scheme over four categories: person, organization, location, and miscellaneous. For the validation and test sets, sentences from Wikipedia were annotated following the same annotation guidelines.
#### FMTODeu_intent
This dataset contains utterance texts and intent annotations drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque-translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The examples are annotated with one of 12 different intent classes corresponding to alarm, reminder or weather-related actions.
#### FMTODeu_slot
This dataset contains utterance texts and sequence intent argument annotations designed for slot filling tasks, drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque-translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The task is a sequence labelling task similar to NERC, following the BIO annotation scheme over 11 categories.
#### BHTCv2
The corpus contains 12,296 news headlines (brief article descriptions) from the Basque weekly newspaper [Argia](https://www.argia.eus). Topics are classified uniquely according to twelve thematic categories.
#### BEC2016eu
The Basque Election Campaign 2016 Opinion Dataset (BEC2016eu) is a new dataset for the task of sentiment analysis, a sequence classification task, which contains tweets about the campaign for the Basque elections from 2016. The crawling was carried out during the election campaign period (2016/09/09-2016/09/23), by monitoring the main parties and their respective candidates. The tweets were manually annotated as positive, negative or neutral.
#### VaxxStance
The VaxxStance (Agerri et al., 2021) dataset originally provides texts and stance annotations for social media texts around the anti-vaccine movement. Texts are given a label indicating whether they express an AGAINST, FAVOR or NONE stance towards the topic.
#### QNLIeu
This task includes the QA dataset ElkarHizketak (Otegi et al. 2020), a low-resource conversational Question Answering (QA) dataset for Basque created by native speaker volunteers. The dataset is built on top of Wikipedia sections about popular people and organizations, and it contains around 400 dialogues and 1600 question and answer pairs. The task was adapted into a sentence-pair binary classification task, following the design of QNLI for English (Wang et al. 2019). Each question-answer pair is given a label indicating whether the answer is entailed by the question.
#### WiCeu
Word in Context or WiC (Pilehvar and Camacho-Collados 2019) is a word sense disambiguation (WSD) task, designed as a particular form of sentence pair binary classification. Given two text snippets and a polysemous word that appears in both of them (the span of the word is marked in both snippets), the task is to determine whether the word has the same sense in both sentences. This dataset is based on the EPEC-EuSemcor (Pociello et al. 2011) sense-tagged corpus.
#### EpecKorrefBin
EPEC-KORREF-Bin is a dataset derived from EPEC-KORREF (Soraluze et al. 2012), a corpus of Basque news documents with manually annotated mentions and coreference chains, which we have converted into a binary classification task. In this task, the model has to predict whether two mentions from a text, which can be pronouns, nouns or noun phrases, refer to the same entity.
#### Leaderboard
Results obtained for two BERT base models as a baseline for the Benchmark.
| | AVG | NERC | F_intent | F_slot | BHTC | BEC | Vaxx | QNLI | WiC | coref |
|------------------------------------------------------------|:-----:|:-----:|:---------:|:-------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| Model | | F1 | F1 | F1 | F1 | F1 | MF1 | acc | acc | acc |
|[BERTeus](https://huggingface.co/ixa-ehu/berteus-base-cased)| 73.23 | 81.92 | 82.52 | 74.34 | 78.26 | 69.43 | 59.30 | 74.26 | 70.71 | 68.31 |
|[ElhBERTeu](https://huggingface.co/elh-eus/ElhBERTeu) | 73.71 | 82.30 | 82.24 | 75.64 | 78.05 | 69.89 | 63.81 | 73.84 | 71.71 | 65.93 |
The results obtained on NERC are the average of in-domain and out-of-domain NERC.
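The stance task (VaxxStance) is scored with macro-averaged F1 (MF1), which weights every class equally regardless of its frequency. As a reference for the metric itself, here is a minimal pure-Python sketch; the benchmark ships its own evaluation scripts, so this is only illustrative:

```python
def macro_f1(gold, pred):
    """Macro-averaged F1: compute F1 per class, then average with equal class weight."""
    labels = sorted(set(gold) | set(pred))
    f1_scores = []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)
```

Because rare classes count as much as frequent ones, a model that ignores a minority stance class is penalized heavily under MF1.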
### Languages
Data are available in Basque (BCP-47 `eu`)
## Dataset Structure
### Data Instances
#### NERCid/NERCood
An example of 'train' looks as follows:
```
{
"idx": 0,
"tags": ["O", "O", "O", "O", "B-ORG", "O", ...],
"tokens": ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako", ...]
}
```
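For reference, the BIO tag sequences in these instances can be decoded into entity spans with a short, generic helper. This is an illustrative sketch, not part of the benchmark's own tooling:

```python
def bio_to_spans(tokens, tags):
    """Decode a BIO tag sequence into (entity_type, token_list) spans."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always opens a new span, closing any open one first.
            if current_type is not None:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag continues the open span of the same type.
            current_tokens.append(token)
        else:
            # "O", or an I- tag that does not continue the open span.
            if current_type is not None:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = None, []
    if current_type is not None:
        spans.append((current_type, current_tokens))
    return spans
```

Applied to the instance above, this yields a single `("ORG", ["EHk"])` span.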
#### FMTODeu_intent
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "alarm/modify_alarm",
"text": "aldatu alarma 7am-tik 7pm-ra , mesedez"
}
```
#### FMTODeu_slot
An example of 'train' looks as follows:
```
{
"idx": 923,
"tags": ["O", "B-reminder/todo", "I-datetime", "I-datetime", "B-reminder/todo"],
"tokens": ["gogoratu", "zaborra", "gaur", "gauean", "ateratzea"]
}
```
#### BHTCv2
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "Gizartea",
"text": "Genero berdintasunaz, hezkuntzaz eta klase gizarteaz hamar liburu baino gehiago..."
}
```
#### BEC2016eu
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "NEU",
"text": '"Emandako hitza bete egingo dut" Urkullu\nBa galdeketa enegarrenez daramazue programan (ta zuen AHTa...)\n#I25debatea #URL"'
}
```
#### VaxxStance
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "FAVOR",
"text": "\"#COVID19 Oraingo datuak, izurriaren dinamika, txertoaren eragina eta birusaren..
}
```
#### QNLIeu
An example of 'train' looks as follows:
```
{
"idx": 1,
"label": "not_entailment",
"question": "Zein posiziotan jokatzen du Busquets-ek?",
"sentence": "Busquets 23 partidatan izan zen konbokatua eta 2 gol sartu zituen."
}
```
#### WiCeu
An example of 'test' looks as follows:
```
{
"idx": 16,
"label": false,
"word": "udal",
"sentence1": "1a . Lekeitioko udal mugarteko Alde Historikoa Birgaitzeko Plan Berezia behin...",
"sentence2": "Diezek kritikatu egin zuen EAJk zenbait udaletan EH gobernu taldeetatik at utzi...",
"start1": 16,
"start2": 40,
"end1": 21,
"end2": 49
}
```
#### EpecKorrefBin
An example of 'train' looks as follows:
```
{
"idx": 6,
"label": false,
"text": "Isuntza da faborito nagusia Elantxobeko banderan . ISUNTZA trainerua da faborito nagusia bihar Elantxoben jokatuko den bandera irabazteko .",
"span1_text": "Elantxobeko banderan",
"span2_text": "ISUNTZA trainerua",
"span1_index": 4,
"span2_index": 8
}
```
### Data Fields
#### NERCid
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### NERCood
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### FMTODeu_intent
* `text`: a `string` feature
* `label`: an intent label, with possible values including:
* `alarm/cancel_alarm`
* `alarm/modify_alarm`
* `alarm/set_alarm`
* `alarm/show_alarms`
* `alarm/snooze_alarm`
* `alarm/time_left_on_alarm`
* `reminder/cancel_reminder`
* `reminder/set_reminder`
* `reminder/show_reminders`
* `weather/checkSunrise`
* `weather/checkSunset`
* `weather/find`
* `idx`: an `int32` feature
#### FMTODeu_slot
* `tokens`: a list of `string` features
* `tags`: a list of intent labels, with possible values including:
* `datetime`
* `location`
* `negation`
* `alarm/alarm_modifier`
* `alarm/recurring_period`
* `reminder/noun`
* `reminder/todo`
* `reminder/reference`
* `reminder/recurring_period`
* `weather/attribute`
* `weather/noun`
* `idx`: an `int32` feature
#### BHTCv2
* `text`: a `string` feature
* `label`: a topic label, with possible values including:
  * `Ekonomia`
  * `Euskal Herria`
  * `Euskara`
  * `Gizartea`
  * `Historia`
  * `Ingurumena`
  * `Iritzia`
  * `Komunikazioa`
  * `Kultura`
  * `Nazioartea`
  * `Politika`
  * `Zientzia`
* `idx`: an `int32` feature
#### BEC2016eu
* `text`: a `string` feature
* `label`: a polarity label, with possible values including `neutral` (NEU), `negative` (N), `positive` (P)
* `idx`: an `int32` feature
#### VaxxStance
* `text`: a `string` feature
* `label`: a stance label, with possible values including `AGAINST`, `FAVOR`, `NONE`
* `idx`: an `int32` feature
#### QNLIeu
* `question`: a `string` feature
* `sentence`: a `string` feature
* `label`: an entailment label, with possible values including `entailment`, `not_entailment`
* `idx`: an `int32` feature
#### WiCeu
* `word`: a `string` feature
* `sentence1`: a `string` feature
* `sentence2`: a `string` feature
* `label`: a `boolean` label indicating sense agreement, with possible values including `true`, `false`
* `start1`: an `int` feature indicating the character position where the word occurrence begins in the first sentence
* `start2`: an `int` feature indicating the character position where the word occurrence begins in the second sentence
* `end1`: an `int` feature indicating the character position where the word occurrence ends in the first sentence
* `end2`: an `int` feature indicating the character position where the word occurrence ends in the second sentence
* `idx`: an `int32` feature
#### EpecKorrefBin
* `text`: a `string` feature.
* `label`: a `boolean` coreference label, with possible values including `true`, `false`.
* `span1_text`: a `string` feature
* `span2_text`: a `string` feature
* `span1_index`: an `int` feature indicating token index where `span1_text` feature occurs in `text`
* `span2_index`: an `int` feature indicating token index where `span2_text` feature occurs in `text`
* `idx`: an `int32` feature
### Data Splits
| Dataset | \|Train\| | \|Val\| | \|Test\| |
|---------|--------:|------:|-------:|
| NERCid | 51,539 | 12,936 | 35,855 |
| NERCood | 64,475 | 14,945 | 14,462 |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 |
| BHTCv2 | 8,585 | 1,857 | 1,854 |
| BEC2016eu | 6,078 | 1,302 | 1,302 |
| VaxxStance | 864 | 206 | 312 |
| QNLIeu | 1,764 | 230 | 238 |
| WiCeu | 408,559 | 600 | 1,400 |
| EpecKorrefBin | 986 | 320 | 587 |
## Dataset Creation
### Curation Rationale
We believe that BasqueGLUE is a significant contribution towards developing NLU tools in Basque, which will facilitate technological advances for the Basque language. In order to create BasqueGLUE we took the GLUE and SuperGLUE frameworks as a reference. When possible, we re-used existing datasets for Basque, adapting them to the corresponding task formats where necessary. Additionally, BasqueGLUE also includes six new datasets that had not been published before. In total, BasqueGLUE consists of nine Basque NLU tasks and covers a wide range of tasks with different difficulties across several domains. As with the original GLUE benchmark, the training data for the tasks vary in size, which makes it possible to measure how well models transfer knowledge across tasks.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Gorka Urbizu [1], Iñaki San Vicente [1], Xabier Saralegi [1], Rodrigo Agerri [2] and Aitor Soroa [2]
Affiliation of the authors:
[1] orai NLP Technologies
[2] HiTZ Center - Ixa, University of the Basque Country UPV/EHU
### Licensing Information
Each dataset of the BasqueGLUE benchmark has its own license (since most of them are, or are derived from, already existing datasets). See their respective README files for details.
Here we provide a brief summary of their licenses:
| Dataset | License |
|---------|---------|
| NERCid | CC BY-NC-SA 4.0 |
| NERCood | CC BY-NC-SA 4.0 |
| FMTODeu_intent | CC BY-NC-SA 4.0 |
| FMTODeu_slot | CC BY-NC-SA 4.0 |
| BHTCv2 | CC BY-NC-SA 4.0 |
| BEC2016eu | Twitter's license + CC BY-NC-SA 4.0 |
| VaxxStance | Twitter's license + CC BY 4.0 |
| QNLIeu | CC BY-SA 4.0 |
| WiCeu | CC BY-NC-SA 4.0 |
| EpecKorrefBin | CC BY-NC-SA 4.0 |
For the rest of the files of the benchmark, including the loading and evaluation scripts, the following license applies:
Copyright (C) by Orai NLP Technologies.
This benchmark and its evaluation scripts are licensed under the Creative Commons Attribution Share Alike 4.0
International License (CC BY-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.
### Citation Information
```
@InProceedings{urbizu2022basqueglue,
author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1603--1612},
abstract = {Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.},
url = {https://aclanthology.org/2022.lrec-1.172}
}
```
### Contributions
Thanks to [@richplant](https://github.com/richplant) for adding this dataset to Hugging Face.
| [
-0.4893433153629303,
-0.8301721215248108,
0.29199346899986267,
0.35725367069244385,
-0.18487349152565002,
-0.028245707973837852,
-0.35980063676834106,
-0.5778352618217468,
0.4785135090351105,
0.3671775460243225,
-0.6403752565383911,
-0.7832212448120117,
-0.6483064889907837,
0.2245447188615... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexandreteles/mental-health-conversational-data | alexandreteles | 2022-12-28T06:19:29Z | 79 | 7 | null | [
"license:other",
"region:us"
] | 2022-12-28T06:19:29Z | 2022-12-28T06:03:01.000Z | 2022-12-28T06:03:01 | ---
license: other
dataset_info:
features:
- name: Context
dtype: string
- name: Knowledge
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 79696
num_examples: 661
download_size: 21508
dataset_size: 79696
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
torchgeo/ucmerced | torchgeo | 2023-01-31T21:48:28Z | 79 | 1 | null | [
"task_categories:image-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"region:us"
] | 2023-01-31T21:48:28Z | 2023-01-31T21:45:28.000Z | 2023-01-31T21:45:28 | ---
license: cc
task_categories:
- image-classification
language:
- en
pretty_name: UC Merced
size_categories:
- 10K<n<100K
---
Redistributed from http://weegee.vision.ucmerced.edu/datasets/landuse.html without modification. See https://www.usgs.gov/faqs/what-are-terms-uselicensing-map-services-and-data-national-map for license. | [
-0.39003440737724304,
-0.3012692332267761,
0.5058104395866394,
0.23116900026798248,
-0.5373645424842834,
-0.4783203601837158,
-0.049834057688713074,
-0.3876357674598694,
0.16172775626182556,
1.3837976455688477,
-0.42729538679122925,
-0.7086511850357056,
-0.4966479241847992,
-0.044007986783... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mstz/german | mstz | 2023-04-16T17:28:54Z | 79 | 0 | null | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"breast",
"tabular_classification",
"binary_classification",
"region:us"
] | 2023-04-16T17:28:54Z | 2023-03-23T11:32:15.000Z | 2023-03-23T11:32:15 | ---
language:
- en
tags:
- breast
- tabular_classification
- binary_classification
pretty_name: German
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- encoding
- loan
license: cc
---
# German
The [German dataset](https://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Dataset on loan grants to customers.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| loan | Binary classification | Has the loan request been accepted? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/german", "loan")["train"]
```
# Features
|**Feature** |**Type** |
|------------------------------------|-----------|
|`checking_account_status` | `int8` |
|`account_life_in_months` | `int8` |
|`credit_status` | `int8` |
|`loan_purpose` | `string` |
|`current_credit` | `int32` |
|`current_savings` | `int8` |
|`employed_since` | `int8` |
|`installment_rate_percentage` | `int8` |
|`sex` | `int8` |
|`marital_status` | `string` |
|`guarantors` | `int8` |
|`years_living_in_current_residence` | `int8` |
|`age` | `int8` |
|`installment_plans` | `string` |
|`housing_status` | `int8` |
|`nr_credit_accounts_in_bank` | `int8` |
|`job_status` | `int8` |
|`number_of_people_in_support` | `int8` |
|`has_registered_phone_number` | `int8` |
|`is_foreign` | `int8` | | [
-0.45596709847450256,
-0.52751225233078,
0.26871457695961,
0.27969011664390564,
0.002365415683016181,
-0.23744502663612366,
-0.036162760108709335,
-0.1605134755373001,
0.24329085648059845,
0.713158130645752,
-0.6906548142433167,
-0.8183269500732422,
-0.6989130973815918,
0.19148869812488556... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
RyokoAI/ShareGPT52K | RyokoAI | 2023-04-02T13:16:51Z | 79 | 244 | null | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:es",
"language:de",
"language:multilingual",
"license:cc0-1.0",
"conversation",
"rlhf",
"chatgpt",
"gpt-3.5",
"region:us"
] | 2023-04-02T13:16:51Z | 2023-04-01T19:59:59.000Z | 2023-04-01T19:59:59 | ---
license: cc0-1.0
language:
- en
- es
- de
- multilingual
pretty_name: ShareGPT 90K Conversations
task_categories:
- text-generation
tags:
- conversation
- rlhf
- chatgpt
- gpt-3.5
size_categories:
- 10K<n<100K
---
# Dataset Card for ShareGPT~~52K~~90K
## Dataset Description
- **Homepage: N/A**
- **Repository: N/A**
- **Paper: N/A**
- **Leaderboard: N/A**
- **Point of Contact:** Ronsor Labs <ronsor@ronsor.com>
### Dataset Summary
This dataset is a collection of approximately ~~52,000~~**90,000** conversations scraped via the ShareGPT API before it was shut down.
These conversations include both user prompts and responses from OpenAI's ChatGPT.
This repository now contains the new 90K conversations version. The previous 52K may
be found in the `old/` directory.
### Supported Tasks and Leaderboards
* text-generation
### Languages
This dataset is expected to primarily consist of messages in English and other Western languages.
## Dataset Structure
### Data Instances
```json
{
"id": "2wtLdZm",
"conversations": [
{
"from": "human",
"value": "Are you familiar with the SAS programming language?"
},
{
"from": "gpt",
"value": "<div class=\"markdown prose w-full break-words dark:prose-invert light\"><p>Yes, I am familiar with the SAS programming language. SAS is a statistical software suite that is widely used in data analysis, data management, and business intelligence applications. It has its own programming language, which is used to manipulate and analyze data, generate reports, and automate data-driven workflows. The language is known for its ease of use and powerful capabilities, and it is commonly used in industries such as healthcare, finance, and government.</p></div>"
},
{
"from": "human",
"value": "Can you write a SAS proc that imports a dataset from a CSV saved on my local machine?"
}
]
}
```
### Data Fields
* **id**: conversation id
* **conversations**: conversation session array
* **from**: ChatGPT ("gpt") or the user ("human")
* **value**: message contents as raw HTML
### Data Splits
N/A
## Dataset Creation
### Curation Rationale
This is a decently large dataset of realistic human-AI conversations which I believe should be released
to the research community.
### Source Data
#### Initial Data Collection and Normalization
This data was collected using the ShareGPT API.
#### Who are the source language producers?
ShareGPT users and OpenAI ChatGPT.
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
This dataset *may* contain personal information, if ShareGPT users were sending such information to
ChatGPT. ChatGPT warns users not to submit personal information to it, however, so without further
evaluation, we believe that this dataset should contain little or no personal information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset may be used to train models that are competitive with OpenAI's ChatGPT. Please filter
this dataset first, as it may contain canned responses, raw HTML, and other undesirable information.
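Since the `value` fields are raw HTML, a minimal standard-library sketch for extracting just the text is shown below. Real cleaning may need more care (e.g. preserving code blocks or list structure); this helper is illustrative only:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text content of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def strip_html(value):
    """Return the plain text of an HTML message value, entities decoded."""
    parser = TextExtractor()
    parser.feed(value)
    return "".join(parser.chunks).strip()
```

For example, applied to a `<div class="markdown"><p>...</p></div>` response, this keeps only the paragraph text.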
### Discussion of Biases
This dataset exhibits all the biases of OpenAI's ChatGPT models (GPT-3.5 and GPT-4) as well as the
biases of the users who uploaded the conversations.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
None.
### Licensing Information
**CC0: No Rights Reserved.**
The output of machine learning algorithms is uncopyrightable in the United States and other jurisdictions.
**Additionally, the OpenAI terms of service do not apply to this dataset as users of this dataset
are not accessing the OpenAI service.**
### Citation Information
TODO
### Contributions
These conversations were allegedly scraped by an anonymous user on 4chan.
The 90K version was sourced from [this post](https://boards.4channel.org/g/thread/92487155/lmg-local-models-general-snail-edition#p92490887).
Thanks, anon!
| [
-0.3825679421424866,
-0.798620879650116,
0.054280515760183334,
0.23506660759449005,
-0.2234070897102356,
0.06102269887924194,
-0.3291679322719574,
-0.49134528636932373,
0.37092599272727966,
0.43919605016708374,
-0.7961573600769043,
-0.6623243689537048,
-0.6331831812858582,
-0.0968400686979... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KK04/LogicInference_OA | KK04 | 2023-04-05T15:38:22Z | 79 | 5 | null | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"Logic Inference",
"region:us"
] | 2023-04-05T15:38:22Z | 2023-04-05T14:35:16.000Z | 2023-04-05T14:35:16 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 30414202
num_examples: 54607
download_size: 7588805
dataset_size: 30414202
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- Logic Inference
size_categories:
- 10K<n<100K
---
# Dataset Card for "LogicInference_OA"
This is a reproduction of the LogicInference dataset from the paper: https://openreview.net/pdf?id=HAGeIS_Lcg9.
The GitHub page of the LogicInference dataset: https://github.com/google-research/google-research/tree/master/logic_inference_dataset.
This dataset aims to offer more data for the Open Assistant project; following its format requirements, there are three columns: INSTRUCTION, RESPONSE, SOURCE.
The results in this dataset are a little different from those introduced in the original paper:
1. Of the three splits (IID/OOD/length), only IID is used. In the original paper, it seems that models reach better performance with data generated by this split method.
2. In the original paper, there are two forms of responses: LOGICINFERENCE<sub>b</sub> (with the answer at the beginning) and LOGICINFERENCE<sub>e</sub> (with the answer at the end). This dataset uses LOGICINFERENCE<sub>e</sub>, which means that for all questions the model first performs logic inference and gives the final answer at the end.
3. In the original paper, some parameters in generate_dataset.py are:
N_INFERENCE_PROBLEMS = 5000
N_VARIATIONS = 25
N_EXAMPLES = 200000
TRAIN_RATIO = 0.9
LENGTH_SPLIT_THRESHOLD = 4
RANDOM_SEED = 0
I chose some new parameters:
N_INFERENCE_PROBLEMS = 10000
N_VARIATIONS = 25
N_EXAMPLES = 55000
TRAIN_RATIO = 1
LENGTH_SPLIT_THRESHOLD = 4
RANDOM_SEED = 1111
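As a quick sanity check on these settings, the resulting pairs-per-problem ratios can be computed from the counts reported in this card (a throwaway illustration, not part of the generation script):

```python
# Rough expansion ratios: Instruction-Response pairs per unique inference
# problem, using the counts reported in this card.
original = 200_000 / 4814   # original paper's settings
this_card = 54_607 / 5491   # the settings above

print(round(original, 1))   # ~41.5 pairs per problem
print(round(this_card, 1))  # ~9.9 pairs per problem
```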
The original script generated 4814 different inference problems and extended them to around 200,000 Q-A pairs. My settings generated 5491 different inference problems and extended them to around 54,607 Instruction-Response pairs. I think that for the Open Assistant project the number of different inference problems is more important; generating many similar Instruction-Response pairs only adds training time and doesn't make much sense. | [
-0.5357800722122192,
-0.6464894413948059,
0.45615696907043457,
-0.21475327014923096,
-0.12577258050441742,
-0.48342886567115784,
-0.020093947649002075,
-0.5494310259819031,
0.35464444756507874,
0.47986868023872375,
-0.6225241422653198,
-0.05175662413239479,
-0.4753884971141815,
0.148561000... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Babelscape/REDFM | Babelscape | 2023-06-20T07:33:35Z | 79 | 4 | null | [
"task_categories:token-classification",
"size_categories:10K<n<100K",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:it",
"language:fr",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2306.09802",
"region:us"
] | 2023-06-20T07:33:35Z | 2023-06-13T16:46:41.000Z | 2023-06-13T16:46:41 | ---
dataset_info:
- config_name: ar
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: test
num_bytes: 521806
num_examples: 345
- name: validation
num_bytes: 577499
num_examples: 385
download_size: 3458539
dataset_size: 1099305
- config_name: de
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 2455615
num_examples: 2071
- name: test
num_bytes: 334212
num_examples: 285
- name: validation
num_bytes: 310862
num_examples: 252
download_size: 8072481
dataset_size: 3100689
- config_name: en
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 4387657
num_examples: 2878
- name: test
num_bytes: 654376
num_examples: 446
- name: validation
num_bytes: 617141
num_examples: 449
download_size: 13616716
dataset_size: 5659174
- config_name: es
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 2452744
num_examples: 1866
- name: test
num_bytes: 345782
num_examples: 281
- name: validation
num_bytes: 299692
num_examples: 228
download_size: 7825400
dataset_size: 3098218
- config_name: fr
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 2280992
num_examples: 1865
- name: test
num_bytes: 427990
num_examples: 415
- name: validation
num_bytes: 429165
num_examples: 416
download_size: 8257363
dataset_size: 3138147
- config_name: it
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 1918310
num_examples: 1657
- name: test
num_bytes: 489445
num_examples: 509
- name: validation
num_bytes: 485557
num_examples: 521
download_size: 7537265
dataset_size: 2893312
- config_name: zh
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: test
num_bytes: 311905
num_examples: 270
- name: validation
num_bytes: 364077
num_examples: 307
download_size: 1952982
dataset_size: 675982
- config_name: all_languages
features:
- name: docid
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: lan
dtype: string
- name: text
dtype: string
- name: entities
list:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: relations
list:
- name: subject
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
- name: predicate
dtype:
class_label:
names:
'0': country
'1': place of birth
'2': spouse
'3': country of citizenship
'4': instance of
'5': capital
'6': child
'7': shares border with
'8': author
'9': director
'10': occupation
'11': founded by
'12': league
'13': owned by
'14': genre
'15': named after
'16': follows
'17': headquarters location
'18': cast member
'19': manufacturer
'20': located in or next to body of water
'21': location
'22': part of
'23': mouth of the watercourse
'24': member of
'25': sport
'26': characters
'27': participant
'28': notable work
'29': replaces
'30': sibling
'31': inception
- name: object
struct:
- name: uri
dtype: string
- name: surfaceform
dtype: string
- name: type
dtype: string
- name: start
dtype: int32
- name: end
dtype: int32
splits:
- name: train
num_bytes: 13557340
num_examples: 10337
- name: test
num_bytes: 3100822
num_examples: 2551
- name: validation
num_bytes: 3099341
num_examples: 2558
download_size: 50720746
dataset_size: 19757503
task_categories:
- token-classification
language:
- ar
- de
- en
- es
- it
- fr
- zh
size_categories:
- 10K<n<100K
license: cc-by-sa-4.0
---
# RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset
This is the human-filtered dataset from the 2023 ACL paper [RED^{FM}: a Filtered and Multilingual Relation Extraction Dataset](https://arxiv.org/abs/2306.09802). If you use the dataset, please reference this work in your paper:
@inproceedings{huguet-cabot-et-al-2023-redfm-dataset,
title = "RED$^{\rm FM}$: a Filtered and Multilingual Relation Extraction Dataset",
author = "Huguet Cabot, Pere-Llu{\'\i}s and Tedeschi, Simone and Ngonga Ngomo, Axel-Cyrille and
Navigli, Roberto",
booktitle = "Proc. of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2306.09802",
}
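The `predicate` field in the schema above is stored as an integer class label. A minimal decoding sketch that mirrors the id-to-name mapping from the YAML (plain Python, without the `datasets` library; with `datasets`, the feature's `int2str` would do the same):

```python
# Id-to-name mapping for the `predicate` class label, copied from the
# `class_label` names in the schema above (ids 0..31).
PREDICATES = [
    "country", "place of birth", "spouse", "country of citizenship",
    "instance of", "capital", "child", "shares border with", "author",
    "director", "occupation", "founded by", "league", "owned by", "genre",
    "named after", "follows", "headquarters location", "cast member",
    "manufacturer", "located in or next to body of water", "location",
    "part of", "mouth of the watercourse", "member of", "sport",
    "characters", "participant", "notable work", "replaces", "sibling",
    "inception",
]

def decode_predicate(label_id: int) -> str:
    """Map a stored integer predicate label back to its relation name."""
    return PREDICATES[label_id]

print(decode_predicate(17))  # headquarters location
```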
## License
RED<sup>FM</sup> is licensed under the CC BY-SA 4.0 license. The text of the license can be found [here](https://creativecommons.org/licenses/by-sa/4.0/). | [
-0.5591685175895691,
-0.5347570776939392,
0.28278741240501404,
0.34507355093955994,
-0.35599076747894287,
-0.26453474164009094,
-0.28184643387794495,
-0.7819048166275024,
0.1701854020357132,
0.5721792578697205,
-0.8718693256378174,
-0.6197825074195862,
-0.5297898054122925,
0.51553362607955... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
yulongmannlp/dev_para | yulongmannlp | 2023-06-26T00:32:27Z | 79 | 0 | null | [
"region:us"
] | 2023-06-26T00:32:27Z | 2023-06-26T00:30:52.000Z | 2023-06-26T00:30:52 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Juzzy88/science_dict_full | Juzzy88 | 2023-08-05T08:20:01Z | 79 | 0 | null | [
"region:us"
] | 2023-08-05T08:20:01Z | 2023-08-01T22:52:24.000Z | 2023-08-01T22:52:24 | ---
dataset_info:
features:
- name: role_1
dtype: string
- name: topic;
dtype: string
- name: sub_topic
dtype: string
- name: message_1
dtype: string
- name: message_2
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 200646759
num_examples: 38400
- name: val
num_bytes: 50121062
num_examples: 9600
- name: test
num_bytes: 62653743
num_examples: 12000
download_size: 148930334
dataset_size: 313421564
---
# Dataset Card for "science_dict_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.489177405834198,
-0.23858462274074554,
0.4817970395088196,
0.32302170991897583,
-0.3600168526172638,
0.16407065093517303,
0.15218965709209442,
0.012706296518445015,
0.9586687684059143,
0.1567213237285614,
-0.787651538848877,
-0.7555415630340576,
-0.7268859148025513,
0.0877770185470581,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Hani89/medical_asr_recording_dataset | Hani89 | 2023-10-10T05:41:22Z | 79 | 1 | null | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] | 2023-10-10T05:41:22Z | 2023-10-10T05:13:04.000Z | 2023-10-10T05:13:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
struct:
- name: array
sequence:
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 3128740048
num_examples: 5328
- name: test
num_bytes: 776455056
num_examples: 1333
download_size: 3882364624
dataset_size: 3905195104
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- medical
size_categories:
- 1K<n<10K
---
**Data Source**<br>
[Kaggle Medical Speech, Transcription, and Intent](https://www.kaggle.com/datasets/paultimothymooney/medical-speech-transcription-and-intent "Visit Original Dataset Page on Kaggle")<br>
**Context**<br>
>8.5 hours of audio utterances paired with text for common medical symptoms.<br>
**Content**<br>
>This data contains thousands of audio utterances for common medical symptoms like “knee pain” or “headache,” totaling more than 8 hours in aggregate. Each utterance was created by individual human contributors based on a given symptom. These audio snippets can be used to train conversational agents in the medical field.<br>
>
>This Figure Eight dataset was created via a multi-job workflow. The first involved contributors writing text phrases to describe symptoms given. For example, for “headache,” a contributor might write “I need help with my migraines.” Subsequent jobs captured audio utterances for accepted text strings.<br>
>
>Note that some of the labels are incorrect and some of the audio files have poor quality. I would recommend cleaning the dataset before training any machine learning models.<br>
>
>This dataset contains both the audio utterances and corresponding transcriptions.<br>
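This repack resamples every utterance to 16 kHz. As an illustration of that step only, here is a naive linear-interpolation resampler in pure Python (a sketch; a real pipeline would use a proper resampler such as librosa or torchaudio):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler (illustration only)."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate        # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)   # clamp at the last sample
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# One second of audio at 44.1 kHz becomes 16,000 samples at 16 kHz.
src = [0.0] * 44_100
dst = resample_linear(src, 44_100, 16_000)
print(len(dst))  # 16000
```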
**What's new**<br>
*The data is cleaned of all columns except file_path and phrase<br>
*All audios are loaded into the DatasetDict as a 1D float32 array<br>
*All audios are resampled to 16 kHz<br>
*The new structure:<br>
train = {<br>
'audio': {<br>
    'path': file_path, *the mp3 files are not included here; please visit the Kaggle page to download them*<br>
'array': waveform_np,<br>
'sampling_rate': 16000<br>
},<br>
'sentence': the text transcription<br>
} | [
0.1322869211435318,
-0.7646380662918091,
0.5918672680854797,
0.27640867233276367,
-0.14877673983573914,
-0.23919014632701874,
-0.3383684754371643,
-0.4933416247367859,
0.44402506947517395,
0.6563199162483215,
-0.8146987557411194,
-0.843574047088623,
-0.7814387083053589,
0.15696834027767181... | null | null | null | null | null | null | null | null | null | null | null | null | null |