id (string, 2–115 chars) | lastModified (string, 24 chars) | tags (list) | author (string, 2–42 chars, nullable) | description (string, 0–6.67k chars, nullable) | citation (string, 0–10.7k chars, nullable) | likes (int64, 0–3.66k) | downloads (int64, 0–8.89M) | created (timestamp[us]) | card (string, 11–977k chars) | card_len (int64, 11–977k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
pipyp/justweirdimages | 2023-10-17T12:49:10.000Z | [
"region:us"
] | pipyp | null | null | 0 | 0 | 2023-10-17T12:44:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dotyk/italy_streets | 2023-10-17T13:02:47.000Z | [
"region:us"
] | dotyk | null | null | 0 | 0 | 2023-10-17T13:01:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
johannes-garstenauer/ENN_masking_embeddings_dim_512 | 2023-10-17T13:05:15.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T13:04:52 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 138580320
num_examples: 67272
download_size: 177638515
dataset_size: 138580320
---
# Dataset Card for "ENN_masking_embeddings_dim_512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 419 | [
[
-0.052490234375,
-0.022735595703125,
0.0004343986511230469,
0.028411865234375,
-0.01357269287109375,
0.005153656005859375,
0.0121002197265625,
-0.006671905517578125,
0.08331298828125,
0.042022705078125,
-0.0467529296875,
-0.06292724609375,
-0.046142578125,
-... |
jiangyige/PSP5 | 2023-10-17T13:40:38.000Z | [
"license:unknown",
"region:us"
] | jiangyige | null | null | 0 | 0 | 2023-10-17T13:22:36 | ---
license: unknown
---
---
Description:
The Paraphrased Sentence Pairs - 5 types (PSP-5) dataset comprises five fundamental categories of paraphrased English sentence pairs:
1. Declarative sentences (statements)
2. Interrogative sentences (questions)
3. Imperative sentences (commands)
4. Exclamatory sentences (exclamations)
5. Sentence fragments (common in spoken English).
There are 3 columns in the table:
1. sentence
2. chatGPT_paraphrased
3. type
There are 10,000 sentence pairs in all.
--- | 479 | [
[
-0.01534271240234375,
-0.04180908203125,
0.03009033203125,
0.039093017578125,
-0.047088623046875,
0.005584716796875,
0.0194549560546875,
0.00033736228942871094,
0.00945281982421875,
0.0662841796875,
-0.01678466796875,
-0.033782958984375,
-0.0232086181640625,
... |
johannes-garstenauer/ENN_class_embeddings_dim_64 | 2023-10-17T13:25:03.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T13:24:53 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 18028896
num_examples: 67272
download_size: 24547776
dataset_size: 18028896
---
# Dataset Card for "ENN_class_embeddings_dim_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 413 | [
[
-0.04742431640625,
-0.0164947509765625,
0.00852203369140625,
0.011749267578125,
-0.0111083984375,
-0.004688262939453125,
0.00782012939453125,
-0.0006742477416992188,
0.0626220703125,
0.030853271484375,
-0.030242919921875,
-0.0648193359375,
-0.04510498046875,
... |
922-CA/lp2_10172023_test1 | 2023-10-17T13:41:31.000Z | [
"region:us"
] | 922-CA | null | null | 0 | 0 | 2023-10-17T13:41:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-aslg_pc12-default-b07e93-95722146456 | 2023-10-17T13:53:16.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-17T13:48:58 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- aslg_pc12
eval_info:
task: translation
model: HamdanXI/t5_small_gloss_merged_dataset
metrics: ['comet', 'bertscore']
dataset_name: aslg_pc12
dataset_config: default
dataset_split: train
col_mapping:
source: gloss
target: text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_gloss_merged_dataset
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model. | 857 | [
[
-0.0309295654296875,
-0.0063629150390625,
0.01898193359375,
0.0162506103515625,
-0.0153961181640625,
-0.0093841552734375,
-0.014190673828125,
-0.0379638671875,
0.00812530517578125,
0.02532958984375,
-0.072509765625,
-0.024200439453125,
-0.050384521484375,
-0... |
johannes-garstenauer/ENN_masking_embeddings_dim_64 | 2023-10-17T13:52:00.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T13:51:43 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 18028896
num_examples: 67272
download_size: 24542250
dataset_size: 18028896
---
# Dataset Card for "ENN_masking_embeddings_dim_64"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 415 | [
[
-0.04931640625,
-0.0175018310546875,
0.0019235610961914062,
0.021759033203125,
-0.0190582275390625,
-0.0006289482116699219,
0.01113128662109375,
-0.0079498291015625,
0.07476806640625,
0.042144775390625,
-0.04132080078125,
-0.06719970703125,
-0.054443359375,
... |
mesolitica/translated-unnatural_code_instructions_20M | 2023-10-17T14:04:29.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-17T14:02:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
denniswischer/EU-Sustainable-Finance-Taxonomy | 2023-10-17T14:13:09.000Z | [
"region:us"
] | denniswischer | null | null | 0 | 0 | 2023-10-17T14:13:09 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bnag0312/runpod | 2023-10-17T14:29:00.000Z | [
"region:us"
] | bnag0312 | null | null | 0 | 0 | 2023-10-17T14:28:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
louisbrulenaudet/tax-fr | 2023-10-18T10:24:39.000Z | [
"task_categories:text-generation",
"task_categories:table-question-answering",
"task_categories:summarization",
"task_categories:conversational",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:fr",
"license:apache-2.0",
"instruction-finetuning",
"leg... | louisbrulenaudet | null | null | 0 | 0 | 2023-10-17T14:53:32 | ---
license: apache-2.0
language:
- fr
multilinguality:
- monolingual
tags:
- instruction-finetuning
- legal
- tax
- llm
- fiscal
source_datasets:
- original
pretty_name: >-
Instruction fine-tuning Large Language Models for tax practice using
quantization and LoRA: a boilerplate
task_categories:
- text-generation
- table-question-answering
- summarization
- conversational
size_categories:
- n<1K
---
# Instruction fine-tuning Large Language Models for tax practice using quantization and LoRA: a boilerplate
This project focuses on fine-tuning pre-trained language models to create efficient and accurate models for tax practice.
Fine-tuning is the process of adapting a pre-trained model to perform specific tasks or cater to particular domains. It involves adjusting the model's parameters through a further round of training on task-specific or domain-specific data. While conventional fine-tuning strategies involve supervised learning with labeled data, instruction-based fine-tuning introduces a more structured and interpretable approach.
Instruction-based fine-tuning leverages the power of human-provided instructions to guide the model's behavior. These instructions can be in the form of text prompts, prompts with explicit task descriptions, or a combination of both. This approach allows for a more controlled and context-aware interaction with the LLM, making it adaptable to a multitude of specialized tasks.
Instruction-based fine-tuning significantly enhances the performance of LLMs in the following ways:
- Task-Specific Adaptation: LLMs, when fine-tuned with specific instructions, exhibit remarkable adaptability to diverse tasks. They can switch seamlessly between translation, summarization, and question-answering, guided by the provided instructions.
- Reduced Ambiguity: Traditional LLMs might generate ambiguous or contextually inappropriate responses. Instruction-based fine-tuning allows for a clearer and more context-aware generation, reducing the likelihood of nonsensical outputs.
- Efficient Knowledge Transfer: Instructions can encapsulate domain-specific knowledge, enabling LLMs to benefit from expert guidance. This knowledge transfer is particularly valuable in fields like tax practice, law, medicine, and more.
- Interpretability: Instruction-based fine-tuning also makes LLM behavior more interpretable. Since the instructions are human-readable, it becomes easier to understand and control model outputs.
- Adaptive Behavior: LLMs, post instruction-based fine-tuning, exhibit adaptive behavior that is responsive to both explicit task descriptions and implicit cues within the provided text.
## Dataset generation
This JSON file is a list of dictionaries; each dictionary contains the following fields:
- `instruction`: `str`, describes the task the model should perform. Each of the instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction.
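As a minimal sketch, a record following this schema can be checked like so (the sample entry below is illustrative only, not an actual entry from the dataset):

```python
import json

# Illustrative record following the instruction/input/output schema above;
# the French content is a placeholder, not an actual dataset entry.
record = {
    "instruction": "Expliquer le principe de territorialité de l'impôt sur les sociétés.",
    "input": "",
    "output": "En droit fiscal français, le principe de territorialité ...",
}

def is_valid(rec: dict) -> bool:
    """Check that a record carries exactly the three expected string fields."""
    return set(rec) == {"instruction", "input", "output"} and all(
        isinstance(v, str) for v in rec.values()
    )

assert is_valid(record)
print(json.dumps(record, ensure_ascii=False, indent=2))
```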
We used the following prompt for generating the dataset:
```
Objectif : Élaboration d'un ensemble de 5-10 problématiques ou instructions diverses dans un fichier JSON à destination d'un modèle de langage pour un objectif d'entrainement (fine-tuning) aux fins d'assistance du métier de fiscaliste.
Schéma de la liste de dictionnaires souhaitée :
[
{
"instruction" :"xxx",
"input" : "xxx",
"output" : "xxx"
}
]
Exigences à respecter :
1. Élimination de la répétition et utilisation de structures de phrases élaborées. Éviter toute redondance de contenu dans les phrases successives tout en favorisant l'utilisation de structures de phrases complexes qui élargissent la portée de l'expression.
2. Diversité linguistique des instructions. Les directives doivent être formulées de manière variée, en combinant des questions avec des instructions impératives.
3. Variété des types d'instructions. Les types d'instructions doivent être variés, couvrant une gamme de tâches propres à l'activité de fiscaliste, telles que la génération de questions ouvertes, la classification, etc.
4. Qualité linguistique. Les instructions, les entrées et les sorties doivent être rédigées en français sans aucune faute d'orthographe, de syntaxe, de ponctuation ou de grammaire.
5. Langage professionnel et académique. Les instructions, les entrées et les sorties doivent être reformulées pour adopter un discours professionnel et académique, caractérisé par sa rigueur, sa justification et une structure détaillée.
6. Neutralité ou nuance. Le point de vue doit demeurer neutre ou nuancé.
7. Contextualisation des thématiques fiscales. Les instructions doivent explicitement faire référence à la thématique fiscale et au sujet de la source pour contextualiser le résultat.
8. Saisie inutile. Toutes les instructions ne nécessitent pas d'entrée. Par exemple, lorsqu'une directive demande une information générale, il n'est pas nécessaire de fournir un contexte spécifique. Dans ce cas, intégrer "" dans le champ de saisie de l'entrée.
9. Style littéraire et exemplification. Les directives, les entrées et les sorties doivent être formulées dans un style littéraire, avec des réponses techniques, exhaustives, complexes et claires. Des exemples, lorsque pertinents, doivent être utilisés pour renforcer la directive, l'entrée et la sortie, tout en garantissant un haut degré de certitude.
10. Directivité des instructions. Utiliser un style direct en favorisant les formulations impersonnelles.
11. Entraînement de modèles professionnels. La base de données finale doit être destinée à l'entraînement de modèles professionnels, visant à assister les fiscalistes expérimentés en quête de contenu de haute qualité et de perfection technique.
12. Gestion des éléments incohérents. Il est possible que le texte source contienne des éléments incohérents avec le contexte, comme des notes de bas de page ou des éléments de formalisme. Il est essentiel de les ignorer pour isoler le contenu principal.
13. Utilisation du texte source. Utiliser le texte source fourni pour formuler les directives, les entrées et les sorties. Le texte source doit être considéré comme de haute qualité et autoritaire.
14. Finalité de la réponse. Seule la liste de dictionnaire au format JSON doit constituer la réponse à cette requête. Aucune introduction ou conclusion n'est demandée.
Source :
[
]
```
## Citing this project
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2023,
author = {Louis Brulé Naudet},
title = {Instruction fine-tuning Large Language Models for tax practice using quantization and LoRA: a boilerplate},
howpublished = {\url{https://github.com/louisbrulenaudet/trainer}},
year = {2023}
}
```
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com). | 6,876 | [
[
-0.0129852294921875,
-0.06011962890625,
0.0191802978515625,
0.028472900390625,
-0.01190948486328125,
-0.007297515869140625,
-0.0386962890625,
-0.0188140869140625,
0.0140533447265625,
0.046478271484375,
-0.031646728515625,
-0.042694091796875,
-0.030517578125,
... |
johannes-garstenauer/ENN_masking_embeddings_dim_16 | 2023-10-17T14:54:58.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T14:54:52 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 5112672
num_examples: 67272
download_size: 5922260
dataset_size: 5112672
---
# Dataset Card for "ENN_masking_embeddings_dim_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 412 | [
[
-0.0498046875,
-0.0279388427734375,
0.0012063980102539062,
0.0227508544921875,
-0.0176544189453125,
0.0025501251220703125,
0.0088653564453125,
-0.01137542724609375,
0.076904296875,
0.0382080078125,
-0.046661376953125,
-0.0638427734375,
-0.04559326171875,
-0.... |
brunnolou/swiss-code-of-obligations | 2023-10-18T12:24:59.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"legal",
"region:us"
] | brunnolou | null | null | 0 | 0 | 2023-10-17T15:37:22 | ---
license: apache-2.0
language:
- en
tags:
- legal
pretty_name: Swiss Code of Obligations
size_categories:
- 1K<n<10K
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: data
path: "swiss-code-of-obligations-articles.jsonl"
---
# Swiss Code of Obligations (OR) – Swiss Civil Code
#### (Part Five: The Code of Obligations) of 30 March 1911 (Status as of 1 September 2023)
JSON file generated from the Swiss [publication platform for federal law](https://www.fedlex.admin.ch/en/home)
[Swiss Code of Obligations](https://www.fedlex.admin.ch/eli/cc/27/317_321_377/en)
### Format
You can download the data in two file formats:
- [json](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations-articles.json)
- [jsonl](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations-articles.jsonl)
Each article has the following type definition:
```ts
{
headings: string[]
article: string
link: string
content: string
}
```
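One way to read the JSONL export and access these fields is sketched below (the sample line is illustrative, not an actual article from the file):

```python
import json

# A minimal sketch of parsing the JSONL export; the sample line below is
# illustrative and not an actual article from the file.
sample_jsonl = (
    '{"headings": ["Part Five: The Code of Obligations"],'
    ' "article": "Art. 1",'
    ' "link": "https://www.fedlex.admin.ch/eli/cc/27/317_321_377/en",'
    ' "content": "The conclusion of a contract requires a mutual expression of intent ..."}'
)

articles = [json.loads(line) for line in sample_jsonl.splitlines() if line.strip()]
for art in articles:
    # Each object carries headings (a list of strings) plus the
    # article, link, and content strings described by the type above.
    print(f"{art['article']}: {art['headings'][0]} -> {art['link']}")
```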
You can also find the original HTML from which the data was extracted:
- [html](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations.html)
## Qdrant Vector Database
The embeddings for this snapshot were created with [Xenova/gte-small](https://huggingface.co/Xenova/gte-small). Unzip the snapshot before using.
- [Snapshot - Qdrant version v1.6.1 (zip)](https://huggingface.co/datasets/brunnolou/swiss-code-of-obligations/resolve/main/swiss-code-of-obligations-articles-gte-small-2023-10-18-12-13-25_qdrant-v1-6-1.snapshot.zip)
<img src="https://cdn-uploads.huggingface.co/production/uploads/65256343a9f5b404762da984/LgxeBf0Bu_IkFtM3niWfq.png" width=480 />
| 1,782 | [
[
-0.0211639404296875,
-0.01418304443359375,
0.0303497314453125,
0.026947021484375,
-0.027008056640625,
0.0128936767578125,
0.023162841796875,
-0.0272674560546875,
0.022003173828125,
0.04913330078125,
-0.04949951171875,
-0.056854248046875,
-0.02764892578125,
0... |
seuprimrose/ccdm-data | 2023-10-17T16:03:06.000Z | [
"region:us"
] | seuprimrose | null | null | 0 | 0 | 2023-10-17T15:37:40 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
johannes-garstenauer/ENN_masking_embeddings_dim_4 | 2023-10-17T15:46:18.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T15:46:14 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1883616
num_examples: 67272
download_size: 1465145
dataset_size: 1883616
---
# Dataset Card for "ENN_masking_embeddings_dim_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 411 | [
[
-0.0489501953125,
-0.0219268798828125,
0.008392333984375,
0.02642822265625,
-0.014190673828125,
0.0030384063720703125,
0.0187835693359375,
-0.009521484375,
0.07769775390625,
0.042633056640625,
-0.040802001953125,
-0.06402587890625,
-0.04156494140625,
0.00308... |
CarrotzRule123/crawl-nextrift | 2023-10-17T16:04:46.000Z | [
"region:us"
] | CarrotzRule123 | null | null | 0 | 0 | 2023-10-17T16:03:08 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
johannes-garstenauer/ENN_masking_embeddings_dim_2 | 2023-10-17T16:07:58.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T16:07:55 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1345440
num_examples: 67272
download_size: 750654
dataset_size: 1345440
---
# Dataset Card for "ENN_masking_embeddings_dim_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 410 | [
[
-0.0391845703125,
-0.028289794921875,
-0.0011682510375976562,
0.0270843505859375,
-0.0196380615234375,
-0.002410888671875,
0.0164031982421875,
-0.01157379150390625,
0.07342529296875,
0.0406494140625,
-0.039215087890625,
-0.051605224609375,
-0.052154541015625,
... |
johannes-garstenauer/ENN_class_embeddings_dim_1 | 2023-10-17T16:25:03.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T16:25:00 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1076352
num_examples: 67272
download_size: 400578
dataset_size: 1076352
---
# Dataset Card for "ENN_class_embeddings_dim_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 408 | [
[
-0.04937744140625,
-0.030853271484375,
0.0017213821411132812,
0.0167999267578125,
-0.0106964111328125,
-0.00769805908203125,
0.01468658447265625,
0.005352020263671875,
0.07635498046875,
0.031707763671875,
-0.040252685546875,
-0.06256103515625,
-0.040313720703125... |
open-llm-leaderboard/details_digitous__Javalion-GPTJ | 2023-10-17T16:30:52.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-17T16:30:44 | ---
pretty_name: Evaluation run of digitous/Javalion-GPTJ
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [digitous/Javalion-GPTJ](https://huggingface.co/digitous/Javalion-GPTJ) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_digitous__Javalion-GPTJ\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T16:30:40.510452](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__Javalion-GPTJ/blob/main/results_2023-10-17T16-30-40.510452.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0008389261744966443,\n\
\ \"em_stderr\": 0.0002964962989801232,\n \"f1\": 0.04887374161073851,\n\
\ \"f1_stderr\": 0.0012121662940147047,\n \"acc\": 0.3347011350709951,\n\
\ \"acc_stderr\": 0.008454252569236846\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0008389261744966443,\n \"em_stderr\": 0.0002964962989801232,\n\
\ \"f1\": 0.04887374161073851,\n \"f1_stderr\": 0.0012121662940147047\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.016679302501895376,\n \
\ \"acc_stderr\": 0.0035275958887224543\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6527229676400947,\n \"acc_stderr\": 0.013380909249751237\n\
\ }\n}\n```"
repo_url: https://huggingface.co/digitous/Javalion-GPTJ
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- '**/details_harness|drop|3_2023-10-17T16-30-40.510452.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T16-30-40.510452.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- '**/details_harness|gsm8k|5_2023-10-17T16-30-40.510452.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T16-30-40.510452.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- '**/details_harness|winogrande|5_2023-10-17T16-30-40.510452.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T16-30-40.510452.parquet'
- config_name: results
data_files:
- split: 2023_10_17T16_30_40.510452
path:
- results_2023-10-17T16-30-40.510452.parquet
- split: latest
path:
- results_2023-10-17T16-30-40.510452.parquet
---
# Dataset Card for Evaluation run of digitous/Javalion-GPTJ
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/digitous/Javalion-GPTJ
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [digitous/Javalion-GPTJ](https://huggingface.co/digitous/Javalion-GPTJ) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_digitous__Javalion-GPTJ",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T16:30:40.510452](https://huggingface.co/datasets/open-llm-leaderboard/details_digitous__Javalion-GPTJ/blob/main/results_2023-10-17T16-30-40.510452.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801232,
"f1": 0.04887374161073851,
"f1_stderr": 0.0012121662940147047,
"acc": 0.3347011350709951,
"acc_stderr": 0.008454252569236846
},
"harness|drop|3": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801232,
"f1": 0.04887374161073851,
"f1_stderr": 0.0012121662940147047
},
"harness|gsm8k|5": {
"acc": 0.016679302501895376,
"acc_stderr": 0.0035275958887224543
},
"harness|winogrande|5": {
"acc": 0.6527229676400947,
"acc_stderr": 0.013380909249751237
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,187 | [
[
-0.0321044921875,
-0.047943115234375,
0.016632080078125,
0.01727294921875,
-0.01540374755859375,
0.006557464599609375,
-0.037933349609375,
-0.00970458984375,
0.03021240234375,
0.04150390625,
-0.04644775390625,
-0.0711669921875,
-0.05145263671875,
0.014694213... |
johannes-garstenauer/ENN_masking_embeddings_dim_1 | 2023-10-28T19:49:29.000Z | [
"region:us"
] | johannes-garstenauer | null | null | 0 | 0 | 2023-10-17T16:32:06 | ---
dataset_info:
features:
- name: last_hs
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1076352
num_examples: 67272
download_size: 400482
dataset_size: 1076352
---
# Dataset Card for "ENN_masking_embeddings_dim_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 410 | [
[
-0.05096435546875,
-0.03204345703125,
-0.004878997802734375,
0.0273590087890625,
-0.01898193359375,
-0.0037994384765625,
0.0173187255859375,
-0.0019683837890625,
0.088134765625,
0.043548583984375,
-0.05145263671875,
-0.06475830078125,
-0.04937744140625,
-0.0... |
dip67/guanaco-llama2-1k | 2023-10-17T16:38:29.000Z | [
"region:us"
] | dip67 | null | null | 0 | 0 | 2023-10-17T16:38:27 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 444 | [
[
-0.0220184326171875,
-0.0128173828125,
0.01739501953125,
0.037689208984375,
-0.03839111328125,
0.000885009765625,
0.0258941650390625,
-0.0190277099609375,
0.0645751953125,
0.0298919677734375,
-0.054718017578125,
-0.06707763671875,
-0.05029296875,
-0.01603698... |
DuongTrongChi/Back-up-education-QA-data | 2023-10-17T16:55:29.000Z | [
"region:us"
] | DuongTrongChi | null | null | 0 | 0 | 2023-10-17T16:55:27 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: university
dtype: string
- name: code
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1250212
num_examples: 1169
download_size: 402389
dataset_size: 1250212
---
# Dataset Card for "Back-up-education-QA-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 662 | [
[
-0.02685546875,
0.0023708343505859375,
0.0190582275390625,
0.00872039794921875,
0.0005831718444824219,
0.01375579833984375,
0.037872314453125,
0.0183563232421875,
0.058502197265625,
0.03826904296875,
-0.06561279296875,
-0.05810546875,
-0.019683837890625,
-0.... |
feynman-integrals-nn/t331ZZZM | 2023-10-24T23:01:02.000Z | [
"license:cc-by-4.0",
"region:us"
] | feynman-integrals-nn | null | null | 0 | 0 | 2023-10-17T17:24:32 | ---
license: cc-by-4.0
---
* [data](https://huggingface.co/datasets/feynman-integrals-nn/t331ZZZM)
* [source](https://gitlab.com/feynman-integrals-nn/feynman-integrals-nn/-/tree/main/t331ZZZM)
| 194 | [
[
-0.0130767822265625,
-0.028656005859375,
0.023193359375,
0.0240020751953125,
-0.0144195556640625,
0.00027632713317871094,
0.023773193359375,
-0.01519775390625,
0.046356201171875,
0.03521728515625,
-0.06195068359375,
-0.0267333984375,
-0.0287628173828125,
-0.... |
mdfrearth/greenwashed_speech | 2023-10-17T17:50:49.000Z | [
"region:us"
] | mdfrearth | null | null | 0 | 0 | 2023-10-17T17:50:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_Aeala__VicUnlocked-alpaca-30b | 2023-10-17T18:02:33.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-17T18:02:24 | ---
pretty_name: Evaluation run of Aeala/VicUnlocked-alpaca-30b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Aeala/VicUnlocked-alpaca-30b](https://huggingface.co/Aeala/VicUnlocked-alpaca-30b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Aeala__VicUnlocked-alpaca-30b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T18:02:20.593503](https://huggingface.co/datasets/open-llm-leaderboard/details_Aeala__VicUnlocked-alpaca-30b/blob/main/results_2023-10-17T18-02-20.593503.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.011849832214765101,\n\
\ \"em_stderr\": 0.0011081721365098474,\n \"f1\": 0.07360528523489944,\n\
\ \"f1_stderr\": 0.0016918412800750494,\n \"acc\": 0.4642427803704344,\n\
\ \"acc_stderr\": 0.010668138318862291\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.011849832214765101,\n \"em_stderr\": 0.0011081721365098474,\n\
\ \"f1\": 0.07360528523489944,\n \"f1_stderr\": 0.0016918412800750494\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1463229719484458,\n \
\ \"acc_stderr\": 0.00973521055778526\n },\n \"harness|winogrande|5\":\
\ {\n \"acc\": 0.7821625887924231,\n \"acc_stderr\": 0.011601066079939324\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Aeala/VicUnlocked-alpaca-30b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T18_02_20.593503
path:
- '**/details_harness|drop|3_2023-10-17T18-02-20.593503.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T18-02-20.593503.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T18_02_20.593503
path:
- '**/details_harness|gsm8k|5_2023-10-17T18-02-20.593503.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T18-02-20.593503.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T18_02_20.593503
path:
- '**/details_harness|winogrande|5_2023-10-17T18-02-20.593503.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T18-02-20.593503.parquet'
- config_name: results
data_files:
- split: 2023_10_17T18_02_20.593503
path:
- results_2023-10-17T18-02-20.593503.parquet
- split: latest
path:
- results_2023-10-17T18-02-20.593503.parquet
---
# Dataset Card for Evaluation run of Aeala/VicUnlocked-alpaca-30b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Aeala/VicUnlocked-alpaca-30b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Aeala/VicUnlocked-alpaca-30b](https://huggingface.co/Aeala/VicUnlocked-alpaca-30b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Aeala__VicUnlocked-alpaca-30b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T18:02:20.593503](https://huggingface.co/datasets/open-llm-leaderboard/details_Aeala__VicUnlocked-alpaca-30b/blob/main/results_2023-10-17T18-02-20.593503.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.011849832214765101,
"em_stderr": 0.0011081721365098474,
"f1": 0.07360528523489944,
"f1_stderr": 0.0016918412800750494,
"acc": 0.4642427803704344,
"acc_stderr": 0.010668138318862291
},
"harness|drop|3": {
"em": 0.011849832214765101,
"em_stderr": 0.0011081721365098474,
"f1": 0.07360528523489944,
"f1_stderr": 0.0016918412800750494
},
"harness|gsm8k|5": {
"acc": 0.1463229719484458,
"acc_stderr": 0.00973521055778526
},
"harness|winogrande|5": {
"acc": 0.7821625887924231,
"acc_stderr": 0.011601066079939324
}
}
```
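The reported Winogrande standard error is consistent with a sample standard error over a fixed evaluation set. As a sanity-check sketch only (the evaluation-set size of 1,267 items and the error formula below are assumptions, not stated in this card):

```python
import math

# Accuracy and stderr reported above for harness|winogrande|5.
acc = 0.7821625887924231
reported_stderr = 0.011601066079939324

# Assumption: the Winogrande eval split has n = 1,267 examples and the
# harness reports the sample standard error sqrt(p * (1 - p) / (n - 1)).
n = 1267
stderr = math.sqrt(acc * (1 - acc) / (n - 1))

print(abs(stderr - reported_stderr) < 1e-5)  # True under these assumptions
```

If the two values had disagreed, that would suggest a different split size or a different error estimator than assumed here.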
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,248 | [
[
-0.036834716796875,
-0.0546875,
0.01271820068359375,
0.0217742919921875,
-0.0134429931640625,
0.0028228759765625,
-0.0178070068359375,
-0.0173492431640625,
0.03692626953125,
0.039276123046875,
-0.048980712890625,
-0.06988525390625,
-0.053131103515625,
0.0191... |
sayan1101/fin_sum | 2023-10-17T18:03:27.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 0 | 2023-10-17T18:03:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
LuckPr4yx/ASMR | 2023-10-17T18:26:32.000Z | [
"region:us"
] | LuckPr4yx | null | null | 0 | 0 | 2023-10-17T18:25:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yaygomii/Processsed_cv_13 | 2023-10-17T19:04:04.000Z | [
"region:us"
] | yaygomii | null | null | 0 | 0 | 2023-10-17T18:47:33 | ---
dataset_info:
features:
- name: speech
sequence: float32
- name: sampling_rate
dtype: int64
- name: target_text
dtype: string
splits:
- name: train
num_bytes: 3243405329
num_examples: 11973
download_size: 3194827949
dataset_size: 3243405329
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Processsed_cv_13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 541 | [
[
-0.030914306640625,
-0.0177459716796875,
0.020477294921875,
0.0271148681640625,
-0.018341064453125,
-0.0047149658203125,
0.0172882080078125,
0.0009965896606445312,
0.0472412109375,
0.054290771484375,
-0.0743408203125,
-0.05462646484375,
-0.045013427734375,
-... |
Madhukarvenkata/lamini | 2023-10-17T19:01:51.000Z | [
"region:us"
] | Madhukarvenkata | null | null | 0 | 0 | 2023-10-17T19:00:42 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
virtuous/ColdQA_warmup | 2023-10-17T19:05:41.000Z | [
"region:us"
] | virtuous | null | null | 0 | 0 | 2023-10-17T19:04:14 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.0465087890625,
0.052490234375,
0.00505828857421875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.0149993896484375,
-0.06036376953125,
0.0379028320... |
Daniel-Prieto/Dataset-pruebas-2 | 2023-10-19T21:52:00.000Z | [
"region:us"
] | Daniel-Prieto | null | null | 0 | 0 | 2023-10-17T19:05:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: words
sequence: string
- name: bbox
sequence:
sequence: int64
- name: ner_tags
sequence: string
splits:
- name: train
num_bytes: 4014086.0
num_examples: 4
- name: test
num_bytes: 4014086.0
num_examples: 4
download_size: 8033148
dataset_size: 8028172.0
---
# Dataset Card for "Dataset-pruebas-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 672 | [
[
-0.0364990234375,
-0.0198974609375,
0.007228851318359375,
0.03387451171875,
-0.0273284912109375,
-0.005893707275390625,
0.01471710205078125,
-0.01264190673828125,
0.066650390625,
0.043914794921875,
-0.04925537109375,
-0.047393798828125,
-0.04644775390625,
-0... |
open-llm-leaderboard/details_Aspik101__tulu-7b-instruct-pl-lora_unload | 2023-10-17T19:08:54.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-17T19:08:46 | ---
pretty_name: Evaluation run of Aspik101/tulu-7b-instruct-pl-lora_unload
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Aspik101/tulu-7b-instruct-pl-lora_unload](https://huggingface.co/Aspik101/tulu-7b-instruct-pl-lora_unload)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Aspik101__tulu-7b-instruct-pl-lora_unload\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T19:08:42.181138](https://huggingface.co/datasets/open-llm-leaderboard/details_Aspik101__tulu-7b-instruct-pl-lora_unload/blob/main/results_2023-10-17T19-08-42.181138.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0,\n \"\
em_stderr\": 0.0,\n \"f1\": 0.0,\n \"f1_stderr\": 0.0,\n \"\
acc\": 0.24112075769534333,\n \"acc_stderr\": 0.007021809798087479\n },\n\
\ \"harness|drop|3\": {\n \"em\": 0.0,\n \"em_stderr\": 0.0,\n\
\ \"f1\": 0.0,\n \"f1_stderr\": 0.0\n },\n \"harness|gsm8k|5\"\
: {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.48224151539068666,\n \"acc_stderr\": 0.014043619596174959\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Aspik101/tulu-7b-instruct-pl-lora_unload
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T19_08_42.181138
path:
- '**/details_harness|drop|3_2023-10-17T19-08-42.181138.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T19-08-42.181138.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T19_08_42.181138
path:
- '**/details_harness|gsm8k|5_2023-10-17T19-08-42.181138.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T19-08-42.181138.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T19_08_42.181138
path:
- '**/details_harness|winogrande|5_2023-10-17T19-08-42.181138.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T19-08-42.181138.parquet'
- config_name: results
data_files:
- split: 2023_10_17T19_08_42.181138
path:
- results_2023-10-17T19-08-42.181138.parquet
- split: latest
path:
- results_2023-10-17T19-08-42.181138.parquet
---
# Dataset Card for Evaluation run of Aspik101/tulu-7b-instruct-pl-lora_unload
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Aspik101/tulu-7b-instruct-pl-lora_unload
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Aspik101/tulu-7b-instruct-pl-lora_unload](https://huggingface.co/Aspik101/tulu-7b-instruct-pl-lora_unload) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Aspik101__tulu-7b-instruct-pl-lora_unload",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T19:08:42.181138](https://huggingface.co/datasets/open-llm-leaderboard/details_Aspik101__tulu-7b-instruct-pl-lora_unload/blob/main/results_2023-10-17T19-08-42.181138.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0,
"f1_stderr": 0.0,
"acc": 0.24112075769534333,
"acc_stderr": 0.007021809798087479
},
"harness|drop|3": {
"em": 0.0,
"em_stderr": 0.0,
"f1": 0.0,
"f1_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.48224151539068666,
"acc_stderr": 0.014043619596174959
}
}
```
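Split names such as `2023_10_17T19_08_42.181138` are just the run timestamp with `-` and `:` replaced by underscores; a minimal sketch of recovering a `datetime` from one:

```python
from datetime import datetime

# Split names in this dataset encode the run timestamp with underscores
# in place of "-" and ":", e.g. "2023_10_17T19_08_42.181138".
split_name = "2023_10_17T19_08_42.181138"

# Parse it back into a datetime (microsecond precision via %f).
run_time = datetime.strptime(split_name, "%Y_%m_%dT%H_%M_%S.%f")

print(run_time.isoformat())  # 2023-10-17T19:08:42.181138
```

This is handy when comparing run splits programmatically instead of relying on the `latest` alias.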
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,044 | [
[
-0.0276641845703125,
-0.049468994140625,
0.0034885406494140625,
0.01485443115234375,
-0.015899658203125,
0.006526947021484375,
-0.032012939453125,
-0.0185699462890625,
0.033477783203125,
0.049102783203125,
-0.05145263671875,
-0.061370849609375,
-0.049072265625,
... |
autoevaluate/autoeval-eval-aslg_pc12-default-041a04-95805146498 | 2023-10-17T19:20:50.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-17T19:16:39 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- aslg_pc12
eval_info:
task: translation
model: HamdanXI/t5_small_aslg_pc12
metrics: ['rouge']
dataset_name: aslg_pc12
dataset_config: default
dataset_split: train
col_mapping:
source: gloss
target: text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_aslg_pc12
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model. | 822 | [
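The evaluator reports ROUGE for this gloss-to-text translation model. As an illustration of what a unigram-overlap score measures (a toy sketch, not the official ROUGE implementation, and the gloss/text pair below is hypothetical):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Toy ROUGE-1: F1 over whitespace-tokenized unigram overlap, no stemming."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Clipped unigram matches: each token counts at most as often as it
    # appears in the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical gloss-style prediction vs. English reference.
print(round(rouge1_f1("membership action plan", "the membership action plan"), 2))  # 0.86
```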
[
-0.0269775390625,
-0.0062408447265625,
0.0162506103515625,
0.0214691162109375,
-0.0184783935546875,
-0.00848388671875,
-0.01220703125,
-0.038482666015625,
0.0036144256591796875,
0.0233001708984375,
-0.07879638671875,
-0.0214691162109375,
-0.05029296875,
0.00... |
autoevaluate/autoeval-eval-aslg_pc12-default-041a04-95805146499 | 2023-10-17T19:21:11.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-17T19:16:44 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- aslg_pc12
eval_info:
task: translation
model: HamdanXI/t5_small_gloss_merged_dataset
metrics: ['rouge']
dataset_name: aslg_pc12
dataset_config: default
dataset_split: train
col_mapping:
source: gloss
target: text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_gloss_merged_dataset
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model. | 844 | [
[
-0.0309295654296875,
-0.0063629150390625,
0.01898193359375,
0.0162506103515625,
-0.0153961181640625,
-0.0093841552734375,
-0.014190673828125,
-0.0379638671875,
0.00812530517578125,
0.02532958984375,
-0.072509765625,
-0.024200439453125,
-0.050384521484375,
-0... |
autoevaluate/autoeval-eval-aslg_pc12-default-041a04-95805146500 | 2023-10-17T19:21:09.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-17T19:16:49 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- aslg_pc12
eval_info:
task: translation
model: HamdanXI/t5_small_gloss_merged_dataset_random_0.1
metrics: ['rouge']
dataset_name: aslg_pc12
dataset_config: default
dataset_split: train
col_mapping:
source: gloss
target: text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_gloss_merged_dataset_random_0.1
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model. | 866 | [
[
-0.0304718017578125,
-0.00605010986328125,
0.01995849609375,
0.0166168212890625,
-0.0149383544921875,
-0.01142120361328125,
-0.01422882080078125,
-0.03857421875,
0.01007080078125,
0.0249176025390625,
-0.0740966796875,
-0.0238494873046875,
-0.047943115234375,
... |
open-llm-leaderboard/details_quantumaikr__llama-2-7b-hf-guanaco-1k | 2023-10-17T19:26:46.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-17T19:26:38 | ---
pretty_name: Evaluation run of quantumaikr/llama-2-7b-hf-guanaco-1k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [quantumaikr/llama-2-7b-hf-guanaco-1k](https://huggingface.co/quantumaikr/llama-2-7b-hf-guanaco-1k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_quantumaikr__llama-2-7b-hf-guanaco-1k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T19:26:34.289625](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-7b-hf-guanaco-1k/blob/main/results_2023-10-17T19-26-34.289625.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002726510067114094,\n\
\ \"em_stderr\": 0.0005340111700415914,\n \"f1\": 0.056623322147651096,\n\
\ \"f1_stderr\": 0.0013885957029727636,\n \"acc\": 0.40100097356766773,\n\
\ \"acc_stderr\": 0.009867271082149756\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.002726510067114094,\n \"em_stderr\": 0.0005340111700415914,\n\
\ \"f1\": 0.056623322147651096,\n \"f1_stderr\": 0.0013885957029727636\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07429871114480667,\n \
\ \"acc_stderr\": 0.007223844172845574\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7277032359905288,\n \"acc_stderr\": 0.012510697991453937\n\
\ }\n}\n```"
repo_url: https://huggingface.co/quantumaikr/llama-2-7b-hf-guanaco-1k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T19_26_34.289625
path:
- '**/details_harness|drop|3_2023-10-17T19-26-34.289625.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T19-26-34.289625.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T19_26_34.289625
path:
- '**/details_harness|gsm8k|5_2023-10-17T19-26-34.289625.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T19-26-34.289625.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T19_26_34.289625
path:
- '**/details_harness|winogrande|5_2023-10-17T19-26-34.289625.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T19-26-34.289625.parquet'
- config_name: results
data_files:
- split: 2023_10_17T19_26_34.289625
path:
- results_2023-10-17T19-26-34.289625.parquet
- split: latest
path:
- results_2023-10-17T19-26-34.289625.parquet
---
# Dataset Card for Evaluation run of quantumaikr/llama-2-7b-hf-guanaco-1k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/quantumaikr/llama-2-7b-hf-guanaco-1k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [quantumaikr/llama-2-7b-hf-guanaco-1k](https://huggingface.co/quantumaikr/llama-2-7b-hf-guanaco-1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_quantumaikr__llama-2-7b-hf-guanaco-1k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T19:26:34.289625](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__llama-2-7b-hf-guanaco-1k/blob/main/results_2023-10-17T19-26-34.289625.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002726510067114094,
"em_stderr": 0.0005340111700415914,
"f1": 0.056623322147651096,
"f1_stderr": 0.0013885957029727636,
"acc": 0.40100097356766773,
"acc_stderr": 0.009867271082149756
},
"harness|drop|3": {
"em": 0.002726510067114094,
"em_stderr": 0.0005340111700415914,
"f1": 0.056623322147651096,
"f1_stderr": 0.0013885957029727636
},
"harness|gsm8k|5": {
"acc": 0.07429871114480667,
"acc_stderr": 0.007223844172845574
},
"harness|winogrande|5": {
"acc": 0.7277032359905288,
"acc_stderr": 0.012510697991453937
}
}
```
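Because the timestamped split names are fixed-width from year down to microseconds, lexicographic order matches chronological order; a small sketch (the two earlier run names are hypothetical) of selecting the most recent split:

```python
# Hypothetical list of timestamped splits for one configuration; only the
# last name below actually appears in this card.
splits = [
    "2023_10_15T09_00_00.000000",
    "2023_10_16T12_30_00.000000",
    "2023_10_17T19_26_34.289625",
]

# Fixed-width timestamp format => the lexicographic max is the newest run,
# i.e. the run the "latest" split alias points to.
latest = max(splits)
print(latest)  # 2023_10_17T19_26_34.289625
```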
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,353 | [
[
-0.0235748291015625,
-0.053192138671875,
0.017852783203125,
0.01024627685546875,
-0.0221405029296875,
0.0164337158203125,
-0.0190887451171875,
-0.010589599609375,
0.031219482421875,
0.036102294921875,
-0.0439453125,
-0.066162109375,
-0.045562744140625,
0.013... |
anilmam23/lamini_dataset_with_token | 2023-10-17T19:35:19.000Z | [
"region:us"
] | anilmam23 | null | null | 0 | 0 | 2023-10-17T19:35:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sproos/embedding-experiment-data-1 | 2023-10-17T19:44:49.000Z | [
"region:us"
] | sproos | null | null | 0 | 0 | 2023-10-17T19:44:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ai4ce/EgoPAT3Dv1 | 2023-10-17T22:06:55.000Z | [
"region:us"
] | ai4ce | null | null | 0 | 0 | 2023-10-17T20:15:49 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hippocrates/2012i2b2_NER_train | 2023-10-17T20:21:29.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 0 | 2023-10-17T20:21:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hippocrates/2012i2b2_NER_test | 2023-10-17T20:21:36.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 0 | 2023-10-17T20:21:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vincenthuynh/memes-with-captions | 2023-10-17T20:47:51.000Z | [
"region:us"
] | vincenthuynh | null | null | 0 | 0 | 2023-10-17T20:28:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fzliu/sift1b | 2023-10-31T17:08:32.000Z | [
"region:us"
] | fzliu | null | null | 0 | 0 | 2023-10-17T20:28:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
baebee/airoboros-all-unwrapped | 2023-10-17T20:58:21.000Z | [
"region:us"
] | baebee | null | null | 0 | 0 | 2023-10-17T20:40:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-aslg_pc12-default-df42af-95819146512 | 2023-10-17T21:21:10.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-17T21:16:52 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- aslg_pc12
eval_info:
task: translation
model: HamdanXI/t5_small_gloss_merged_dataset_adj_adv
metrics: ['bertscore', 'meteor', 'sari']
dataset_name: aslg_pc12
dataset_config: default
dataset_split: train
col_mapping:
source: gloss
target: text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: HamdanXI/t5_small_gloss_merged_dataset_adj_adv
* Dataset: aslg_pc12
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@HamdanXI](https://huggingface.co/HamdanXI) for evaluating this model. | 882 | [
[
-0.03131103515625,
-0.00682830810546875,
0.0183258056640625,
0.01239776611328125,
-0.014739990234375,
-0.009674072265625,
-0.0120697021484375,
-0.037872314453125,
0.00766754150390625,
0.026092529296875,
-0.07061767578125,
-0.0234832763671875,
-0.04986572265625,
... |
Vettzada/DelegaRP | 2023-10-17T21:46:50.000Z | [
"region:us"
] | Vettzada | null | null | 0 | 0 | 2023-10-17T21:46:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hmao/cvecpe_multiapi_v0 | 2023-10-17T22:23:16.000Z | [
"region:us"
] | hmao | null | null | 0 | 0 | 2023-10-17T22:23:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: fncall
sequence: string
- name: generated_question
dtype: string
splits:
- name: train
num_bytes: 10817
num_examples: 25
download_size: 8627
dataset_size: 10817
---
# Dataset Card for "cvecpe_multiapi_v0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.048126220703125,
0.00936126708984375,
0.0177154541015625,
0.0274810791015625,
-0.0103607177734375,
0.003559112548828125,
0.0206756591796875,
-0.0180206298828125,
0.06500244140625,
0.0343017578125,
-0.0655517578125,
-0.04205322265625,
-0.0285491943359375,
... |
hmao/cvecpe_apis | 2023-10-17T22:48:58.000Z | [
"region:us"
] | hmao | null | null | 0 | 0 | 2023-10-17T22:35:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: args_dicts
list:
- name: default
dtype: string
- name: description
dtype: string
- name: name
dtype: string
- name: required
dtype: bool
- name: type
dtype: string
- name: returns
struct:
- name: description
dtype: string
- name: type
dtype: string
- name: dataset
dtype: string
- name: name
dtype: string
- name: api_type
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 19587
num_examples: 14
download_size: 19268
dataset_size: 19587
---
# Dataset Card for "cvecpe_apis"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 883 | [
[
-0.0465087890625,
-0.001617431640625,
0.0191650390625,
0.0206756591796875,
-0.0036792755126953125,
0.00562286376953125,
0.01482391357421875,
-0.017486572265625,
0.042877197265625,
0.047882080078125,
-0.0697021484375,
-0.057464599609375,
-0.0245208740234375,
... |
open-llm-leaderboard/details_augtoma__qCammel-70 | 2023-10-17T22:35:47.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-17T22:35:39 | ---
pretty_name: Evaluation run of augtoma/qCammel-70
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [augtoma/qCammel-70](https://huggingface.co/augtoma/qCammel-70) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_augtoma__qCammel-70\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T22:35:35.594396](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70/blob/main/results_2023-10-17T22-35-35.594396.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.033766778523489936,\n\
\ \"em_stderr\": 0.001849802869119515,\n \"f1\": 0.10340918624161041,\n\
\ \"f1_stderr\": 0.0022106009828094797,\n \"acc\": 0.5700654570173166,\n\
\ \"acc_stderr\": 0.011407494958111332\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.033766778523489936,\n \"em_stderr\": 0.001849802869119515,\n\
\ \"f1\": 0.10340918624161041,\n \"f1_stderr\": 0.0022106009828094797\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2971948445792267,\n \
\ \"acc_stderr\": 0.012588685966624186\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8429360694554064,\n \"acc_stderr\": 0.010226303949598479\n\
\ }\n}\n```"
repo_url: https://huggingface.co/augtoma/qCammel-70
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T22_35_35.594396
path:
- '**/details_harness|drop|3_2023-10-17T22-35-35.594396.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T22-35-35.594396.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T22_35_35.594396
path:
- '**/details_harness|gsm8k|5_2023-10-17T22-35-35.594396.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T22-35-35.594396.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T22_35_35.594396
path:
- '**/details_harness|winogrande|5_2023-10-17T22-35-35.594396.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T22-35-35.594396.parquet'
- config_name: results
data_files:
- split: 2023_10_17T22_35_35.594396
path:
- results_2023-10-17T22-35-35.594396.parquet
- split: latest
path:
- results_2023-10-17T22-35-35.594396.parquet
---
# Dataset Card for Evaluation run of augtoma/qCammel-70
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/augtoma/qCammel-70
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [augtoma/qCammel-70](https://huggingface.co/augtoma/qCammel-70) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_augtoma__qCammel-70",
"harness_winogrande_5",
split="train")
```
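The per-run split names in the configs above appear to be derived from the run timestamp ("-" and ":" replaced by "_"). A minimal sketch of that mapping — an assumption inferred from the config listing, not documented by the card; the actual `load_dataset` call needs network access to the Hub, so it is shown only as a comment:

```python
def run_split_name(timestamp: str) -> str:
    """Map a run timestamp to its split name, e.g.
    '2023-10-17T22:35:35.594396' -> '2023_10_17T22_35_35.594396'.
    (Naming rule inferred from the config listing above.)"""
    return timestamp.replace("-", "_").replace(":", "_")

print(run_split_name("2023-10-17T22:35:35.594396"))
# -> 2023_10_17T22_35_35.594396

# To actually fetch a specific run (requires the `datasets` library and Hub access):
#   from datasets import load_dataset
#   data = load_dataset("open-llm-leaderboard/details_augtoma__qCammel-70",
#                       "harness_winogrande_5",
#                       split=run_split_name("2023-10-17T22:35:35.594396"))
```

The "latest" split defined in each config is an alias for the newest of these timestamped splits.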
## Latest results
These are the [latest results from run 2023-10-17T22:35:35.594396](https://huggingface.co/datasets/open-llm-leaderboard/details_augtoma__qCammel-70/blob/main/results_2023-10-17T22-35-35.594396.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797,
"acc": 0.5700654570173166,
"acc_stderr": 0.011407494958111332
},
"harness|drop|3": {
"em": 0.033766778523489936,
"em_stderr": 0.001849802869119515,
"f1": 0.10340918624161041,
"f1_stderr": 0.0022106009828094797
},
"harness|gsm8k|5": {
"acc": 0.2971948445792267,
"acc_stderr": 0.012588685966624186
},
"harness|winogrande|5": {
"acc": 0.8429360694554064,
"acc_stderr": 0.010226303949598479
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,125 | [
[
-0.0307769775390625,
-0.0439453125,
0.0171051025390625,
0.0128631591796875,
-0.0162353515625,
0.006557464599609375,
-0.01849365234375,
-0.0109100341796875,
0.0271453857421875,
0.044586181640625,
-0.04718017578125,
-0.06976318359375,
-0.045074462890625,
0.018... |
jsrdhher/upload_use | 2023-10-19T17:41:24.000Z | [
"region:us"
] | jsrdhher | null | null | 0 | 0 | 2023-10-17T22:37:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hmao/all_apis_for_multiapi | 2023-10-19T16:54:52.000Z | [
"region:us"
] | hmao | null | null | 0 | 0 | 2023-10-17T22:54:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: dataset
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: returns
struct:
- name: description
dtype: string
- name: type
dtype: string
- name: api_type
dtype: string
- name: args_dicts
list:
- name: default
dtype: string
- name: description
dtype: string
- name: name
dtype: string
- name: required
dtype: bool
- name: type
dtype: string
splits:
- name: train
num_bytes: 38926
num_examples: 40
download_size: 24017
dataset_size: 38926
---
# Dataset Card for "all_apis_for_multiapi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 893 | [
[
-0.045806884765625,
-0.00572967529296875,
0.027069091796875,
0.03564453125,
-0.00920867919921875,
0.0046539306640625,
0.0251312255859375,
-0.01120758056640625,
0.0701904296875,
0.0303955078125,
-0.067626953125,
-0.05511474609375,
-0.0421142578125,
-0.0014877... |
hmao/multiapi_eval_data | 2023-10-19T16:54:55.000Z | [
"region:us"
] | hmao | null | null | 0 | 0 | 2023-10-17T22:55:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: fncall
sequence: string
- name: dataset
dtype: string
- name: generated_question
dtype: string
splits:
- name: train
num_bytes: 37075
num_examples: 95
download_size: 17812
dataset_size: 37075
---
# Dataset Card for "multiapi_eval_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.04327392578125,
-0.027587890625,
0.01523590087890625,
0.022705078125,
-0.00045871734619140625,
0.01910400390625,
0.0112762451171875,
-0.0038890838623046875,
0.059326171875,
0.026397705078125,
-0.0440673828125,
-0.042633056640625,
-0.032989501953125,
-0.01... |
Mutugi/Solana_150 | 2023-10-17T23:10:11.000Z | [
"region:us"
] | Mutugi | null | null | 0 | 0 | 2023-10-17T23:09:52 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
jjcalderon/man | 2023-10-19T18:42:03.000Z | [
"region:us"
] | jjcalderon | null | null | 0 | 0 | 2023-10-17T23:54:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_cerebras__Cerebras-GPT-256M | 2023-10-17T23:58:52.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-17T23:58:44 | ---
pretty_name: Evaluation run of cerebras/Cerebras-GPT-256M
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [cerebras/Cerebras-GPT-256M](https://huggingface.co/cerebras/Cerebras-GPT-256M)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cerebras__Cerebras-GPT-256M\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-17T23:58:40.333054](https://huggingface.co/datasets/open-llm-leaderboard/details_cerebras__Cerebras-GPT-256M/blob/main/results_2023-10-17T23-58-40.333054.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002202181208053691,\n\
\ \"em_stderr\": 0.00048005108166194305,\n \"f1\": 0.032553481543624224,\n\
\ \"f1_stderr\": 0.0010881632384588218,\n \"acc\": 0.26243093922651933,\n\
\ \"acc_stderr\": 0.007017551441813875\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.002202181208053691,\n \"em_stderr\": 0.00048005108166194305,\n\
\ \"f1\": 0.032553481543624224,\n \"f1_stderr\": 0.0010881632384588218\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5248618784530387,\n\
\ \"acc_stderr\": 0.01403510288362775\n }\n}\n```"
repo_url: https://huggingface.co/cerebras/Cerebras-GPT-256M
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_17T23_58_40.333054
path:
- '**/details_harness|drop|3_2023-10-17T23-58-40.333054.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-17T23-58-40.333054.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_17T23_58_40.333054
path:
- '**/details_harness|gsm8k|5_2023-10-17T23-58-40.333054.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-17T23-58-40.333054.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_17T23_58_40.333054
path:
- '**/details_harness|winogrande|5_2023-10-17T23-58-40.333054.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-17T23-58-40.333054.parquet'
- config_name: results
data_files:
- split: 2023_10_17T23_58_40.333054
path:
- results_2023-10-17T23-58-40.333054.parquet
- split: latest
path:
- results_2023-10-17T23-58-40.333054.parquet
---
# Dataset Card for Evaluation run of cerebras/Cerebras-GPT-256M
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/cerebras/Cerebras-GPT-256M
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [cerebras/Cerebras-GPT-256M](https://huggingface.co/cerebras/Cerebras-GPT-256M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cerebras__Cerebras-GPT-256M",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-17T23:58:40.333054](https://huggingface.co/datasets/open-llm-leaderboard/details_cerebras__Cerebras-GPT-256M/blob/main/results_2023-10-17T23-58-40.333054.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.002202181208053691,
"em_stderr": 0.00048005108166194305,
"f1": 0.032553481543624224,
"f1_stderr": 0.0010881632384588218,
"acc": 0.26243093922651933,
"acc_stderr": 0.007017551441813875
},
"harness|drop|3": {
"em": 0.002202181208053691,
"em_stderr": 0.00048005108166194305,
"f1": 0.032553481543624224,
"f1_stderr": 0.0010881632384588218
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5248618784530387,
"acc_stderr": 0.01403510288362775
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,164 | [
[
-0.029998779296875,
-0.0496826171875,
0.01812744140625,
0.0203857421875,
-0.004634857177734375,
0.01052093505859375,
-0.031646728515625,
-0.00958251953125,
0.0286712646484375,
0.0400390625,
-0.050079345703125,
-0.06756591796875,
-0.053192138671875,
0.0055847... |
zcczhang/UVD | 2023-10-18T00:37:31.000Z | [
"task_categories:robotics",
"license:mit",
"arxiv:2310.08581",
"region:us"
] | zcczhang | null | null | 1 | 0 | 2023-10-18T00:21:19 | ---
license: mit
task_categories:
- robotics
---
# Dataset for Universal Visual Decomposer (UVD)
## Dataset Description
*Homepage:* https://zcczhang.github.io/UVD/
*Codebase:* https://github.com/zcczhang/UVD
*Paper:* https://arxiv.org/abs/2310.08581
## Dataset Summary
This is the dataset used for the paper [Universal Visual Decomposer: Long-Horizon Manipulation Made Easy](https://arxiv.org/abs/2310.08581). We release the simulation data for FrankaKitchen and the data for the real-world experiments.
If you find our work useful, please consider citing us!
```bibtex
@misc{zhang2023universal,
title = {Universal Visual Decomposer: Long-Horizon Manipulation Made Easy},
author = {Zichen Zhang and Yunshuang Li and Osbert Bastani and Abhishek Gupta and Dinesh Jayaraman and Yecheng Jason Ma and Luca Weihs},
year = {2023},
eprint = {arXiv:2310.08581},
}
```
| 953 | [
[
-0.0218048095703125,
-0.057861328125,
0.0288238525390625,
0.016693115234375,
-0.039520263671875,
-0.049102783203125,
-0.008544921875,
-0.03472900390625,
-0.0057373046875,
0.052642822265625,
-0.04449462890625,
-0.052642822265625,
-0.010284423828125,
-0.007385... |
limaxlee/hawala-data | 2023-10-18T00:37:49.000Z | [
"region:us"
] | limaxlee | null | null | 0 | 0 | 2023-10-18T00:36:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_quantumaikr__QuantumLM-7B | 2023-10-18T01:13:20.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-18T01:13:11 | ---
pretty_name: Evaluation run of quantumaikr/QuantumLM-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [quantumaikr/QuantumLM-7B](https://huggingface.co/quantumaikr/QuantumLM-7B) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_quantumaikr__QuantumLM-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T01:13:07.754865](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__QuantumLM-7B/blob/main/results_2023-10-18T01-13-07.754865.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.003355704697986577,\n\
\ \"em_stderr\": 0.0005922452850005221,\n \"f1\": 0.05992030201342302,\n\
\ \"f1_stderr\": 0.001439479348001652,\n \"acc\": 0.39582407087716237,\n\
\ \"acc_stderr\": 0.0100052755032964\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.003355704697986577,\n \"em_stderr\": 0.0005922452850005221,\n\
\ \"f1\": 0.05992030201342302,\n \"f1_stderr\": 0.001439479348001652\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07657316148597422,\n \
\ \"acc_stderr\": 0.007324564881451574\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7150749802683505,\n \"acc_stderr\": 0.012685986125141227\n\
\ }\n}\n```"
repo_url: https://huggingface.co/quantumaikr/QuantumLM-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T01_13_07.754865
path:
- '**/details_harness|drop|3_2023-10-18T01-13-07.754865.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T01-13-07.754865.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T01_13_07.754865
path:
- '**/details_harness|gsm8k|5_2023-10-18T01-13-07.754865.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T01-13-07.754865.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T01_13_07.754865
path:
- '**/details_harness|winogrande|5_2023-10-18T01-13-07.754865.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T01-13-07.754865.parquet'
- config_name: results
data_files:
- split: 2023_10_18T01_13_07.754865
path:
- results_2023-10-18T01-13-07.754865.parquet
- split: latest
path:
- results_2023-10-18T01-13-07.754865.parquet
---
# Dataset Card for Evaluation run of quantumaikr/QuantumLM-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/quantumaikr/QuantumLM-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [quantumaikr/QuantumLM-7B](https://huggingface.co/quantumaikr/QuantumLM-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_quantumaikr__QuantumLM-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T01:13:07.754865](https://huggingface.co/datasets/open-llm-leaderboard/details_quantumaikr__QuantumLM-7B/blob/main/results_2023-10-18T01-13-07.754865.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.003355704697986577,
"em_stderr": 0.0005922452850005221,
"f1": 0.05992030201342302,
"f1_stderr": 0.001439479348001652,
"acc": 0.39582407087716237,
"acc_stderr": 0.0100052755032964
},
"harness|drop|3": {
"em": 0.003355704697986577,
"em_stderr": 0.0005922452850005221,
"f1": 0.05992030201342302,
"f1_stderr": 0.001439479348001652
},
"harness|gsm8k|5": {
"acc": 0.07657316148597422,
"acc_stderr": 0.007324564881451574
},
"harness|winogrande|5": {
"acc": 0.7150749802683505,
"acc_stderr": 0.012685986125141227
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,197 | [
[
-0.02520751953125,
-0.0538330078125,
0.0156097412109375,
0.00658416748046875,
-0.0197601318359375,
0.01505279541015625,
-0.0167388916015625,
-0.00771331787109375,
0.023162841796875,
0.036834716796875,
-0.042266845703125,
-0.06427001953125,
-0.040679931640625,
... |
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-OpenOrca_20w | 2023-10-18T01:15:07.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-18T01:14:59 | ---
pretty_name: Evaluation run of CHIH-HUNG/llama-2-13b-OpenOrca_20w
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [CHIH-HUNG/llama-2-13b-OpenOrca_20w](https://huggingface.co/CHIH-HUNG/llama-2-13b-OpenOrca_20w)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-OpenOrca_20w\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T01:14:55.229555](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-OpenOrca_20w/blob/main/results_2023-10-18T01-14-55.229555.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.14953859060402686,\n\
\ \"em_stderr\": 0.0036521078888639676,\n \"f1\": 0.20982382550335602,\n\
\ \"f1_stderr\": 0.003706029190176112,\n \"acc\": 0.44925660000490675,\n\
\ \"acc_stderr\": 0.010476365550372343\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.14953859060402686,\n \"em_stderr\": 0.0036521078888639676,\n\
\ \"f1\": 0.20982382550335602,\n \"f1_stderr\": 0.003706029190176112\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12661106899166036,\n \
\ \"acc_stderr\": 0.009159715283081094\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663592\n\
\ }\n}\n```"
repo_url: https://huggingface.co/CHIH-HUNG/llama-2-13b-OpenOrca_20w
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T01_14_55.229555
path:
- '**/details_harness|drop|3_2023-10-18T01-14-55.229555.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T01-14-55.229555.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T01_14_55.229555
path:
- '**/details_harness|gsm8k|5_2023-10-18T01-14-55.229555.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T01-14-55.229555.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T01_14_55.229555
path:
- '**/details_harness|winogrande|5_2023-10-18T01-14-55.229555.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T01-14-55.229555.parquet'
- config_name: results
data_files:
- split: 2023_10_18T01_14_55.229555
path:
- results_2023-10-18T01-14-55.229555.parquet
- split: latest
path:
- results_2023-10-18T01-14-55.229555.parquet
---
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-OpenOrca_20w
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-OpenOrca_20w
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-OpenOrca_20w](https://huggingface.co/CHIH-HUNG/llama-2-13b-OpenOrca_20w) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-OpenOrca_20w",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T01:14:55.229555](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-OpenOrca_20w/blob/main/results_2023-10-18T01-14-55.229555.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.14953859060402686,
"em_stderr": 0.0036521078888639676,
"f1": 0.20982382550335602,
"f1_stderr": 0.003706029190176112,
"acc": 0.44925660000490675,
"acc_stderr": 0.010476365550372343
},
"harness|drop|3": {
"em": 0.14953859060402686,
"em_stderr": 0.0036521078888639676,
"f1": 0.20982382550335602,
"f1_stderr": 0.003706029190176112
},
"harness|gsm8k|5": {
"acc": 0.12661106899166036,
"acc_stderr": 0.009159715283081094
},
"harness|winogrande|5": {
"acc": 0.7719021310181531,
"acc_stderr": 0.011793015817663592
}
}
```
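The nested results above can be flattened into a single `task/metric` mapping for easier comparison across runs. A minimal sketch, using an abridged copy of the dict shown above (stderr fields omitted):

```python
# Abridged copy of the results dict printed above.
results = {
    "all": {"em": 0.14953859060402686, "acc": 0.44925660000490675},
    "harness|gsm8k|5": {"acc": 0.12661106899166036},
    "harness|winogrande|5": {"acc": 0.7719021310181531},
}

# Flatten the two-level dict into "task/metric" keys.
flat = {
    f"{task}/{metric}": value
    for task, metrics in results.items()
    for metric, value in metrics.items()
}

print(flat["harness|winogrande|5/acc"])  # accuracy on the winogrande eval
```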
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,317 | [
[
-0.0274658203125,
-0.05316162109375,
0.01593017578125,
0.020111083984375,
-0.0164031982421875,
0.01227569580078125,
-0.0270538330078125,
-0.0235595703125,
0.0341796875,
0.038482666015625,
-0.04754638671875,
-0.06903076171875,
-0.05010986328125,
0.01589965820... |
gqk/opv2v | 2023-10-18T11:04:03.000Z | [
"region:us"
] | gqk | null | null | 0 | 0 | 2023-10-18T01:33:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mesolitica/translated-WizardLM_evol_instruct_V2_196k | 2023-10-18T02:24:05.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-18T02:02:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ouasdg/coco-encoded | 2023-10-18T04:02:26.000Z | [
"region:us"
] | ouasdg | null | null | 0 | 0 | 2023-10-18T02:20:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
frosthead/Forest_Depth_Estimation_by_Frost_Head | 2023-10-18T17:42:03.000Z | [
"task_categories:image-to-image",
"task_categories:depth-estimation",
"size_categories:1K<n<10K",
"license:apache-2.0",
"DepthEstimation",
"forestDatasets",
"images",
"monocularDepthEstimation",
"monocular",
"depth",
"frosthead",
"deeplearning",
"Monocular Vision",
"Image Analysis",
"Ope... | frosthead | null | null | 2 | 0 | 2023-10-18T02:52:30 | ---
license: apache-2.0
task_categories:
- image-to-image
- depth-estimation
tags:
- DepthEstimation
- forestDatasets
- images
- monocularDepthEstimation
- monocular
- depth
- frosthead
- deeplearning
- Monocular Vision
- Image Analysis
- Open Data
- complexDepthEstimation
pretty_name: Forest Depth Estimation by frosthead
size_categories:
- 1K<n<10K
---
# Forest Depth Estimation by Frost Head
## Overview
The Frost Head Forest Depth Estimation Dataset is a comprehensive collection of synthetic forest images generated using Unreal Engine 5. This dataset is specifically designed for advanced forest depth estimation research and related applications in the field of computer vision and environmental analysis.
## Dataset Construction
The Frost Head Forest Depth Estimation Dataset is constructed using advanced rendering techniques in Unreal Engine 5, ensuring the creation of high-fidelity synthetic forest environments. The dataset's synthetic nature enables precise control over environmental parameters, facilitating the generation of diverse and customizable forest landscapes for in-depth analysis and experimentation.
For more information and updates on the Forest Depth Estimation by Frost Head dataset, visit [Frost Head's official website](https://www.frosthead.in).
For inquiries and collaborations, please contact Frost Head at [ed.ayush2003@gmail.com](mailto:ed.ayush2003@gmail.com).
## Preview

## Importance of the Dataset
Understanding the depth characteristics of forest environments is crucial for various applications, including ecological monitoring, landscape analysis, and virtual environment simulations. The Frost Head Forest Depth Estimation Dataset serves as a valuable resource for researchers and developers aiming to improve the accuracy and reliability of depth estimation algorithms in natural environments.
## Use Cases
The dataset can be utilized for various purposes, including:
- Forest depth perception research
- Computer vision algorithm development
- Environmental simulation and analysis
- Training and testing machine learning models for depth estimation
## Data Storage
The dataset follows the following file structure:
```
Forest_Depth_Estimation_by_FrostHead
|__ org
| |__ NewWorld.0001
| |__ NewWorld.0002
| |__ ...
|
|__ depth
| |__ NewWorld.0001
| |__ NewWorld.0002
| |__ ...
```
The **org** folder contains the original synthetic forest images, while the **depth** folder contains corresponding depth maps linked to the original images by their file names. This structure facilitates seamless pairing and analysis of the synthetic forest images and their respective depth maps.
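Because depth maps share file names with their source images, pairing them is a simple directory walk. A minimal sketch of one way to do it (the image extension and the scratch file names are assumptions for illustration):

```python
from pathlib import Path
import tempfile

def pair_images_with_depth(root, ext=".png"):
    """Yield (original, depth_map) path pairs matched by file name."""
    root = Path(root)
    for org_path in sorted((root / "org").glob(f"*{ext}")):
        depth_path = root / "depth" / org_path.name
        if depth_path.exists():  # skip images with no matching depth map
            yield org_path, depth_path

# Demo against a scratch copy of the layout described above.
tmp = Path(tempfile.mkdtemp())
(tmp / "org").mkdir()
(tmp / "depth").mkdir()
(tmp / "org" / "NewWorld.0001.png").touch()
(tmp / "depth" / "NewWorld.0001.png").touch()
(tmp / "org" / "NewWorld.0002.png").touch()  # no depth counterpart

pairs = list(pair_images_with_depth(tmp))
```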
| 2,771 | [
[
-0.04638671875,
-0.0298004150390625,
0.0265350341796875,
0.0229644775390625,
-0.044769287109375,
0.032928466796875,
0.022552490234375,
-0.046112060546875,
0.0169219970703125,
0.026092529296875,
-0.058685302734375,
-0.0582275390625,
-0.024749755859375,
-0.021... |
sunxiyin/test | 2023-10-18T03:07:28.000Z | [
"region:us"
] | sunxiyin | null | null | 0 | 0 | 2023-10-18T03:07:28 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
akmjlohf/pianotrain | 2023-10-18T03:22:32.000Z | [
"region:us"
] | akmjlohf | null | null | 0 | 0 | 2023-10-18T03:22:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hk-kaden-kim/pix2struct-chartcaptioning-v0 | 2023-10-18T03:28:22.000Z | [
"region:us"
] | hk-kaden-kim | null | null | 0 | 0 | 2023-10-18T03:27:56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: chartType
dtype: string
- name: chartElement
dtype: string
- name: dataType
dtype: string
splits:
- name: train
num_bytes: 72690622.389
num_examples: 1373
- name: validation
num_bytes: 10174935.0
num_examples: 199
- name: test
num_bytes: 20658483.0
num_examples: 399
download_size: 102432750
dataset_size: 103524040.389
---
# Dataset Card for "pix2struct-chartcaptioning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 839 | [
[
-0.02947998046875,
-0.0087127685546875,
0.00624847412109375,
0.0305328369140625,
-0.00829315185546875,
0.0144805908203125,
0.03314208984375,
-0.00897216796875,
0.042816162109375,
0.030181884765625,
-0.044158935546875,
-0.042449951171875,
-0.053680419921875,
... |
Hoangggggf/TestDataset | 2023-10-18T03:29:33.000Z | [
"region:us"
] | Hoangggggf | null | null | 0 | 0 | 2023-10-18T03:29:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
aisings/VILLANO_ANTILLANO__DATASET | 2023-10-18T03:34:18.000Z | [
"region:us"
] | aisings | null | null | 0 | 0 | 2023-10-18T03:30:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Ermarrero/riddles_v1 | 2023-10-18T15:51:02.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | Ermarrero | null | null | 1 | 0 | 2023-10-18T03:53:09 | ---
license: apache-2.0
language:
- en
---
All credit goes to https://github.com/crawsome/riddles
I had GPT-4 process each riddle, giving it the correct answer, in order to create a desirable output.
The prompt used to create the output was:
"Answer the users question using the answer. Do not mention you have the answer to the riddle. You are to answer it as if you never got the answer. Prioritize educational insights by breaking down the logic behind your decisions. Make sure to cover all relevant topics in a logical order."
The goal of this dataset was to fine-tune a language model to output reasoning.
The performance with this small dataset is pretty good; it does a rather good job of breaking a question down in a logical order.
Open Orca Mistral 7B fine-tuned with this dataset for one epoch:
Question: I am easy to lift, but hard to throw. What am I?
Output: The riddle describes an object that is easy to lift but hard to throw. This suggests that the object is lightweight and not difficult to pick up, but also lacks the properties that would make it suitable for throwing, such as aerodynamics or a solid shape.
One object that fits this description is a feather. Feathers are lightweight and easy to lift, but they are not designed for throwing. Instead, they are used for insulation, flight, and decoration.
So, the answer to the riddle is a feather.
Correct Answer: Feather
Question: Rachel goes to the supermarket and buys 10 tomatoes. Unfortunately, on the way back home, all but 9 get ruined. How many tomatoes are left in a good condition?
Output: Let's break down the problem step by step. Rachel buys 10 tomatoes, but on her way back home, all but 9 get ruined. This means that 10 - 9 = 1 tomato gets ruined. Since there are 9 tomatoes left in good condition, the answer is 9.
So, Rachel has 9 tomatoes left in a good condition after her journey back home.
Correct Answer: 9
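The processing step described above can be sketched roughly as follows. This is a hypothetical reconstruction, not the exact pipeline used: the message structure and the `build_messages` helper are assumptions, while the system prompt is quoted from this card.

```python
# System prompt quoted from this dataset card.
SYSTEM_PROMPT = (
    "Answer the users question using the answer. Do not mention you have the "
    "answer to the riddle. You are to answer it as if you never got the answer. "
    "Prioritize educational insights by breaking down the logic behind your "
    "decisions. Make sure to cover all relevant topics in a logical order."
)

def build_messages(riddle, answer):
    """Assemble the chat messages sent to the model for one riddle (sketch)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Question: {riddle}\nAnswer: {answer}"},
    ]

messages = build_messages(
    "I am easy to lift, but hard to throw. What am I?", "Feather"
)
```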
| 1,920 | [
[
-0.004764556884765625,
-0.07745361328125,
0.0287017822265625,
-0.00533294677734375,
-0.0108795166015625,
-0.005313873291015625,
-0.0025043487548828125,
-0.044708251953125,
0.02105712890625,
0.0258941650390625,
-0.05401611328125,
0.0030269622802734375,
-0.0541992... |
nekofura/collection | 2023-10-24T16:51:50.000Z | [
"region:us"
] | nekofura | null | null | 0 | 0 | 2023-10-18T04:00:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
scriptshub/guanaco-llama2-1k | 2023-10-18T04:11:51.000Z | [
"region:us"
] | scriptshub | null | null | 0 | 0 | 2023-10-18T04:11:47 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 444 | [
[
-0.0220184326171875,
-0.0128173828125,
0.01739501953125,
0.037689208984375,
-0.03839111328125,
0.000885009765625,
0.0258941650390625,
-0.0190277099609375,
0.0645751953125,
0.0298919677734375,
-0.054718017578125,
-0.06707763671875,
-0.05029296875,
-0.01603698... |
nekofura/scripting | 2023-10-18T04:23:58.000Z | [
"region:us"
] | nekofura | null | null | 0 | 0 | 2023-10-18T04:15:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k | 2023-10-18T04:31:36.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-18T04:31:27 | ---
pretty_name: Evaluation run of Open-Orca/LlongOrca-7B-16k
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Open-Orca/LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T04:31:23.491817](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k/blob/main/results_2023-10-18T04-31-23.491817.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.016988255033557047,\n\
\ \"em_stderr\": 0.0013234068882109723,\n \"f1\": 0.08061136744966452,\n\
\ \"f1_stderr\": 0.001896831507875326,\n \"acc\": 0.4100619744335266,\n\
\ \"acc_stderr\": 0.009753220057431532\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.016988255033557047,\n \"em_stderr\": 0.0013234068882109723,\n\
\ \"f1\": 0.08061136744966452,\n \"f1_stderr\": 0.001896831507875326\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07505686125852919,\n \
\ \"acc_stderr\": 0.007257633145486642\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Open-Orca/LlongOrca-7B-16k
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T04_31_23.491817
path:
- '**/details_harness|drop|3_2023-10-18T04-31-23.491817.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T04-31-23.491817.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T04_31_23.491817
path:
- '**/details_harness|gsm8k|5_2023-10-18T04-31-23.491817.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T04-31-23.491817.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T04_31_23.491817
path:
- '**/details_harness|winogrande|5_2023-10-18T04-31-23.491817.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T04-31-23.491817.parquet'
- config_name: results
data_files:
- split: 2023_10_18T04_31_23.491817
path:
- results_2023-10-18T04-31-23.491817.parquet
- split: latest
path:
- results_2023-10-18T04-31-23.491817.parquet
---
# Dataset Card for Evaluation run of Open-Orca/LlongOrca-7B-16k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Open-Orca/LlongOrca-7B-16k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Open-Orca/LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T04:31:23.491817](https://huggingface.co/datasets/open-llm-leaderboard/details_Open-Orca__LlongOrca-7B-16k/blob/main/results_2023-10-18T04-31-23.491817.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.016988255033557047,
"em_stderr": 0.0013234068882109723,
"f1": 0.08061136744966452,
"f1_stderr": 0.001896831507875326,
"acc": 0.4100619744335266,
"acc_stderr": 0.009753220057431532
},
"harness|drop|3": {
"em": 0.016988255033557047,
"em_stderr": 0.0013234068882109723,
"f1": 0.08061136744966452,
"f1_stderr": 0.001896831507875326
},
"harness|gsm8k|5": {
"acc": 0.07505686125852919,
"acc_stderr": 0.007257633145486642
},
"harness|winogrande|5": {
"acc": 0.745067087608524,
"acc_stderr": 0.012248806969376422
}
}
```
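A quick sketch of pulling the per-task accuracies out of the dict above to see which eval the model scored highest on (numbers copied from the results shown here):

```python
# Per-task accuracies copied from the results dict above.
results = {
    "harness|gsm8k|5": {"acc": 0.07505686125852919},
    "harness|winogrande|5": {"acc": 0.745067087608524},
}

# Pick the task with the highest accuracy.
best_task = max(results, key=lambda task: results[task]["acc"])
print(best_task)  # → harness|winogrande|5
```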
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,221 | [
[
-0.0279083251953125,
-0.05242919921875,
0.01023101806640625,
0.0144500732421875,
-0.0136260986328125,
0.004085540771484375,
-0.0274200439453125,
-0.0237579345703125,
0.035888671875,
0.042510986328125,
-0.04669189453125,
-0.07647705078125,
-0.046051025390625,
... |
tierdesafinante/sasuke_uchiha | 2023-10-18T04:53:18.000Z | [
"region:us"
] | tierdesafinante | null | null | 0 | 0 | 2023-10-18T04:50:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
satirth/coviddata | 2023-10-18T04:57:16.000Z | [
"region:us"
] | satirth | null | null | 0 | 0 | 2023-10-18T04:57:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-samsum-samsum-ec1044-95879146522 | 2023-10-18T05:05:13.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-18T04:59:53 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: Joemgu/mlong-t5-large-sumstew
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Joemgu/mlong-t5-large-sumstew
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@baohuynhbk14](https://huggingface.co/baohuynhbk14) for evaluating this model. | 822 | [
[
-0.0279388427734375,
-0.01067352294921875,
0.01103973388671875,
0.01849365234375,
-0.00762176513671875,
-0.01076507568359375,
0.0017633438110351562,
-0.0294952392578125,
0.026519775390625,
0.036529541015625,
-0.0712890625,
-0.0102081298828125,
-0.051910400390625... |
autoevaluate/autoeval-eval-samsum-samsum-fda4ec-95880146523 | 2023-10-18T05:01:41.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-18T04:59:58 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: Joemgu/long-t5-base-sumstew
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Joemgu/long-t5-base-sumstew
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@baohuynhbk14](https://huggingface.co/baohuynhbk14) for evaluating this model. | 818 | [
[
-0.0290985107421875,
-0.00893402099609375,
0.0149688720703125,
0.0191802978515625,
-0.01202392578125,
-0.005725860595703125,
0.00347900390625,
-0.0303802490234375,
0.024444580078125,
0.032562255859375,
-0.0750732421875,
-0.015167236328125,
-0.0504150390625,
... |
shivanikerai/keyword_mapping_v1_500 | 2023-10-18T05:01:19.000Z | [
"region:us"
] | shivanikerai | null | null | 0 | 0 | 2023-10-18T05:00:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anubhavmaity/my-awesome-dataset | 2023-10-18T05:40:42.000Z | [
"region:us"
] | anubhavmaity | null | null | 0 | 0 | 2023-10-18T05:40:42 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anubhavmaity/github-issues | 2023-10-20T04:00:40.000Z | [
"region:us"
] | anubhavmaity | null | null | 0 | 0 | 2023-10-18T05:43:02 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
dtype: string
- name: labels
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
dtype: string
- name: assignees
dtype: string
- name: milestone
dtype: string
- name: comments
dtype: string
- name: created_at
dtype: string
- name: updated_at
dtype: string
- name: closed_at
dtype: string
- name: author_association
dtype: string
- name: active_lock_reason
dtype: float64
- name: body
dtype: string
- name: reactions
dtype: string
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: float64
- name: state_reason
dtype: string
- name: draft
dtype: float64
- name: pull_request
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 35370223
num_examples: 6279
download_size: 9128830
dataset_size: 35370223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
annotations_creators:
- other
language:
- en
language_creators:
- other
license: []
multilinguality:
- monolingual
pretty_name: Github Issues
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- github-issues
- huggingface-nlp-course
- datasets
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
- document-retrieval | 1,829 | [
[
-0.031097412109375,
-0.0292510986328125,
0.005939483642578125,
0.0283203125,
-0.017791748046875,
0.02996826171875,
-0.026397705078125,
-0.036712646484375,
0.061798095703125,
0.03582763671875,
-0.032745361328125,
-0.046112060546875,
-0.052490234375,
0.0310058... |
haurajahra/SQUAD-ID | 2023-10-18T05:53:07.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:id",
"license:other",
"region:us"
] | haurajahra | null | null | 0 | 0 | 2023-10-18T05:44:11 | ---
license: other
license_name: marian-mt
license_link: LICENSE
task_categories:
- question-answering
language:
- id
size_categories:
- 100K<n<1M
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,511 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
mesolitica/translated-glaive_coder_raw_text | 2023-10-18T05:56:26.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-18T05:54:39 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
QEU/QEU-initialize-300-ja | 2023-10-18T07:05:01.000Z | [
"license:apache-2.0",
"region:us"
] | QEU | null | null | 0 | 0 | 2023-10-18T07:02:02 | ---
license: apache-2.0
---
## This dataset is intended for the **"initialization (initialize)"** step of LLM fine-tuning.
### The data contains only about 300 records.
## Purpose: a group of datasets ("DS group") for making an LLM optimized for non-Japanese languages able to handle Japanese through fine-tuning
### (Note 1: this is strictly for the author's personal testing. You are free to use the DS, but at your own risk.)
### (Note 2: it is intended to be used for roughly 10 epochs of "initialization".)
## Usage:
## (1) Use the following three datasets "serially" (described below).
-1. Initialization dataset (this DS)
-2. One of the four parts into which the author split databrick-15k-ja
-3. A Japanese dataset prepared by the user (the information you actually want the model to learn)
## (2) A plan for serial fine-tuning (please adapt it yourself).
-1. Initialization dataset (this DS): about 10 epochs
-2. The author's databrick-15k-ja split: about 10 epochs
-3. Japanese dataset: train until you are satisfied with the results.
For more details, see [this blog post](https://jpnqeur23lmqsw.blogspot.com/2023/09/qeur23llmdss10-databricks15k.html).
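The serial schedule above can be sketched as a simple training plan. This is a minimal sketch only: the dataset identifiers below are hypothetical placeholders, and only the epoch counts for the first two phases (~10 each) come from the card; the final phase's epoch count is up to the user.

```python
# Minimal sketch of the serial fine-tuning plan described above.
# The dataset names are hypothetical placeholders; the epoch count
# for the final phase is a user choice.
schedule = [
    ("initialize-300-ja", 10),       # phase 1: this initialization DS
    ("databrick-15k-ja-part1", 10),  # phase 2: one quarter of databrick-15k-ja
    ("your-japanese-dataset", 3),    # phase 3: user-chosen epochs
]

for name, epochs in schedule:
    # In a real run you would load `name` here and run your
    # fine-tuning loop for `epochs` epochs before moving on.
    print(f"fine-tune on {name} for {epochs} epochs")

total_epochs = sum(e for _, e in schedule)
assert total_epochs == 23
```

Each phase finishes completely before the next begins, which is what "serial" means here.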
| 659 | [
[
-0.058746337890625,
-0.06610107421875,
0.0288543701171875,
0.0313720703125,
-0.04791259765625,
0.0010700225830078125,
-0.01348114013671875,
-0.0006160736083984375,
0.019378662109375,
0.01381683349609375,
-0.063720703125,
-0.0511474609375,
-0.0399169921875,
0... |
open-llm-leaderboard/details_psmathur__model_420_preview | 2023-10-18T07:05:14.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-18T07:05:06 | ---
pretty_name: Evaluation run of psmathur/model_420_preview
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [psmathur/model_420_preview](https://huggingface.co/psmathur/model_420_preview)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_psmathur__model_420_preview\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T07:05:02.354385](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_420_preview/blob/main/results_2023-10-18T07-05-02.354385.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0016778523489932886,\n\
\ \"em_stderr\": 0.0004191330178826867,\n \"f1\": 0.06602034395973153,\n\
\ \"f1_stderr\": 0.0013713725074901318,\n \"acc\": 0.5827673137371175,\n\
\ \"acc_stderr\": 0.011721630765571481\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.0004191330178826867,\n\
\ \"f1\": 0.06602034395973153,\n \"f1_stderr\": 0.0013713725074901318\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33206974981046244,\n \
\ \"acc_stderr\": 0.012972465034361861\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.0104707964967811\n\
\ }\n}\n```"
repo_url: https://huggingface.co/psmathur/model_420_preview
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T07_05_02.354385
path:
- '**/details_harness|drop|3_2023-10-18T07-05-02.354385.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T07-05-02.354385.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T07_05_02.354385
path:
- '**/details_harness|gsm8k|5_2023-10-18T07-05-02.354385.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T07-05-02.354385.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T07_05_02.354385
path:
- '**/details_harness|winogrande|5_2023-10-18T07-05-02.354385.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T07-05-02.354385.parquet'
- config_name: results
data_files:
- split: 2023_10_18T07_05_02.354385
path:
- results_2023-10-18T07-05-02.354385.parquet
- split: latest
path:
- results_2023-10-18T07-05-02.354385.parquet
---
# Dataset Card for Evaluation run of psmathur/model_420_preview
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/psmathur/model_420_preview
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [psmathur/model_420_preview](https://huggingface.co/psmathur/model_420_preview) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_psmathur__model_420_preview",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T07:05:02.354385](https://huggingface.co/datasets/open-llm-leaderboard/details_psmathur__model_420_preview/blob/main/results_2023-10-18T07-05-02.354385.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0016778523489932886,
"em_stderr": 0.0004191330178826867,
"f1": 0.06602034395973153,
"f1_stderr": 0.0013713725074901318,
"acc": 0.5827673137371175,
"acc_stderr": 0.011721630765571481
},
"harness|drop|3": {
"em": 0.0016778523489932886,
"em_stderr": 0.0004191330178826867,
"f1": 0.06602034395973153,
"f1_stderr": 0.0013713725074901318
},
"harness|gsm8k|5": {
"acc": 0.33206974981046244,
"acc_stderr": 0.012972465034361861
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.0104707964967811
}
}
```
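As a quick sanity check on the numbers above (an observation about these specific results, not documented leaderboard logic), the aggregate `acc` in the `"all"` block is the unweighted mean of the two per-task accuracies, while the `em`/`f1` values are carried over from the DROP task:

```python
# Relate the "all" block to the per-task results shown above.
# Note: the mean below is an observation about these numbers,
# not an official formula of the Open LLM Leaderboard.
results = {
    "all": {"acc": 0.5827673137371175},
    "harness|gsm8k|5": {"acc": 0.33206974981046244},
    "harness|winogrande|5": {"acc": 0.8334648776637726},
}

# Unweighted mean of the two task accuracies matches the aggregate.
mean_acc = (results["harness|gsm8k|5"]["acc"]
            + results["harness|winogrande|5"]["acc"]) / 2
assert abs(mean_acc - results["all"]["acc"]) < 1e-12
```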
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,227 | [
[
-0.02947998046875,
-0.04205322265625,
0.0224151611328125,
0.015045166015625,
-0.020843505859375,
0.0122222900390625,
-0.02392578125,
-0.00469207763671875,
0.027557373046875,
0.038604736328125,
-0.055877685546875,
-0.06488037109375,
-0.050506591796875,
0.0158... |
laiyliod/asdasdasdasd | 2023-10-18T07:30:10.000Z | [
"region:us"
] | laiyliod | null | null | 0 | 0 | 2023-10-18T07:30:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
valtyheaterclickhere/valtyheaterclickhere | 2023-10-18T07:34:38.000Z | [
"region:us"
] | valtyheaterclickhere | null | null | 0 | 0 | 2023-10-18T07:33:03 | ### Device Name: [Valty Heater](https://snoppymart.com/valty-heater/)
➤ Benefit: Easy to install and efficiently warm up the room.
➤ Specifications: 500W Ceramic Heater
➤ Small & Portable: LED Display & Timer
➤ Overheated Protection Guarantee: 14-days money back guarantee
➤ Price: $69.99
➤ Rating: 4.8/5.0
➤ Official Website: [https://snoppymart.com/valty-heater/](https://snoppymart.com/valty-heater/)
➤ Country for Sale: USA, CA
➤ Availability: In Stock
➤ Discount: [Extra discounts available on the website](https://snoppymart.com/take-valty-heater)
### [Valty Heater](https://snoppymart.com/valty-heater/) Reviews, USA, CA
Winters can be extremely chilling for anyone who lives far from the equator. Winter is a time when people can go out in the snow, enjoy life slowing down, and spend a lot of time with their family. The only problem that comes along with it is the unbearable cold. In some places the temperature can even drop below zero, so people must keep themselves warm, as the body can catch the flu very easily. Pneumonia and hypothermia are serious winter concerns too. The only way the human body can get through without issues is to stay warm enough. People can wear a lot of winter clothes outside the home, but at home they need some kind of heater, because wearing a lot of clothes indoors is very uncomfortable. Conventional heaters and radiators take up a lot of space and are very costly too, and the energy consumption of such big appliances is high. Thus, people try to find a better gadget that can keep them warm and is also affordable.
### [**\=> Click Here To Get Valty Heater From The Official Website!**](https://snoppymart.com/take-valty-heater)
[**Valty Portable Heater**](https://snoppymart.com/valty-heater/), available in the US and CA, is the perfect product for people who want a gadget that is small and can heat a room efficiently in very little time. This product works by strongly heating air and radiating it: it is a small device that takes in air from one side, heats it, and then sends it out from the other side to circulate around the room and warm the environment. It can raise the temperature to 30 degrees Celsius and can be controlled using a smart remote too. The Valty Heater is affordable and easy to use.
### How is [Valty Heater](https://snoppymart.com/valty-heater/) useful for people?
Valty Heater is a product that can be called the right innovation at the right time. People have been asking for things that they can keep with themselves at all times, that are compact, and that can do major work. Getting a heater which is the size of a small gadget and can be carried around anywhere is very important.
**[Valty Heater](https://snoppymart.com/valty-heater/)** in CA, USA gives people the pleasure of getting their rooms heated up in 15 minutes with a simple button touch on the gadget's console. It has to be placed in one corner of the room and then turned on with a set temperature. It then ventilates hot air all around the room and makes sure the whole room is warmed up. The best thing about this gadget is its smart sensor, which automatically turns it off as soon as the set temperature is reached. It is affordable and easy to use, and has thus gained a lot of popularity. **[Valty Heater](https://snoppymart.com/valty-heater/)** in the US and CA is, therefore, a very helpful product for students and working people who live alone, and also for people who do not have heaters in their offices.
### **[\=> (SPECIAL OFFER) Click Here To Buy Valty Heater From The Official Website!](https://snoppymart.com/take-valty-heater)**
### [Valty Heater](https://snoppymart.com/valty-heater/) Specifications:
**The portable heater is very efficient. Check all specs below:**
* 500W Ceramic Heater
* Small & Portable
* LED Display & Timer
* Overheated Protection
### What is the price of [Valty Heater](https://snoppymart.com/valty-heater/)?
The Valty Heater is very affordable in the USA and Canada. A single **[Valty Heater](https://snoppymart.com/valty-heater/)** costs only $69.99. You can also get this portable heater in bulk at discounted prices. Check the packages below:
2xValty Heater - $62.99/each
3xValty Heater - $55.99/each (BEST SELLER)
5xValty Heater - $48.99/each (MOST VALUED)
#### **Valty Heater Official Website - [https://snoppymart.com/take-valty-heater](https://snoppymart.com/take-valty-heater)**
### What is the mechanism of [Valty Heater](https://snoppymart.com/valty-heater/)?
The Valty Heater has been made with the help of advanced machines and a lot of research on heat transfer. Hot air rises while colder air settles down, which is why this gadget first sucks in air from the environment, heats it, and radiates it into the room with the help of its fan; then, when that hot air rises in the room, it sucks it in again and radiates it around the room after heating it further. This way the air molecules supply thermal energy to neighbouring molecules and the whole room heats up very easily. The sensor used in this product lets the gadget know what temperature is to be maintained in the room and when to turn itself off.
### **[\=> Buy Valty Heater Before Stock Runs Out!](https://snoppymart.com/take-valty-heater)**
The whole outer body is made of shockproof hard plastic and the inside is made from metal. There is a copper centre in the product which gets heated up by electricity and then heats the air. **[Valty Heater](https://snoppymart.com/valty-heater/)** is therefore an amazing example of how the radiation of heat can be used effectively. It is thus suggested for people and is gaining popularity in the market too.
Check [**Valty Heater**](https://snoppymart.com/valty-heater/) reviews, Valty Heater power consumption, and German-language Valty Heater tests. Visit us here to get a Valty portable heater and read reviews in German.
### How to fit Valty Heater in the room?
[**Valty Heater**](https://snoppymart.com/valty-heater/) in Canada and the USA is a compact gadget that people can use easily. To install the gadget, all a user needs is double-sided tape or a proper mount that is available separately. The mount or tape has to be stuck on the wall in one corner of the room, and then the user just has to put the gadget on this mounting and make sure it does not fall or stick out. The product can then be controlled using the console on it or the smart remote provided by the makers. The connection for this device is manual, and the plug has to be put into a normal supply socket. The remote works with 2 removable AA batteries that can be changed.
### Where to buy [Valty Heater](https://snoppymart.com/valty-heater/)?
The best-selling portable heater is available at the official Valty Heater website. People can order it for delivery to their address, and many payment options are available to choose from. The Valty Heater is available in the USA and Canada. [Visit the official German website](https://snoppymart.com/valty-heater/) of Valty Heater and check Valty Heater user experiences and Stiftung Warentest reviews.
### **[EXCLUSIVE OFFER \*Now On Sale\* Click Here to Buy Valty Heater at the Best Price Online](https://snoppymart.com/take-valty-heater)**
### Disclaimer:
This is promotional content. Must consult an expert before using the device. This post contains an affiliate link and we receive a commission on every sale from this post (at no cost to you). Check the final price on the official website. Read T&C carefully before making any purchase. | 7,865 | [
[
-0.0416259765625,
-0.05572509765625,
0.035675048828125,
0.0136260986328125,
-0.037750244140625,
-0.0139312744140625,
0.023956298828125,
-0.02374267578125,
0.0552978515625,
0.0228271484375,
0.00899505615234375,
0.0030536651611328125,
0.0056915283203125,
-0.00... |
mesolitica/translated-mini-math23k-v1 | 2023-10-18T08:05:06.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-18T08:04:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mesolitica/translated-math_qa | 2023-10-18T08:13:22.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-18T08:12:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Fahsai2323/forty | 2023-10-18T08:26:55.000Z | [
"region:us"
] | Fahsai2323 | null | null | 0 | 0 | 2023-10-18T08:26:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ShuoShuoShuo/public-dataset | 2023-10-18T08:27:22.000Z | [
"region:us"
] | ShuoShuoShuo | null | null | 0 | 0 | 2023-10-18T08:27:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
NabilaTasnia/bn_common_voice_text | 2023-10-18T08:45:11.000Z | [
"region:us"
] | NabilaTasnia | null | null | 0 | 0 | 2023-10-18T08:45:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mesolitica/translated-MathInstruct | 2023-10-19T03:48:50.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-18T08:47:56 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
DigitalUmuganda/ShonaDataset | 2023-10-18T09:20:08.000Z | [
"region:us"
] | DigitalUmuganda | null | null | 0 | 0 | 2023-10-18T09:00:25 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
jackcao2023/test | 2023-10-18T09:08:19.000Z | [
"region:us"
] | jackcao2023 | null | null | 0 | 0 | 2023-10-18T09:08:19 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
ciaomandi/vassoi | 2023-10-18T09:10:04.000Z | [
"region:us"
] | ciaomandi | null | null | 0 | 0 | 2023-10-18T09:09:28 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
harvey2333/Omni_MLLM | 2023-10-18T09:18:52.000Z | [
"region:us"
] | harvey2333 | null | null | 0 | 0 | 2023-10-18T09:18:52 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
open-llm-leaderboard/details_h2oai__h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt | 2023-10-18T10:05:58.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-18T10:05:49 | ---
pretty_name: Evaluation run of h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_h2oai__h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-18T10:05:46.000869](https://huggingface.co/datasets/open-llm-leaderboard/details_h2oai__h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt/blob/main/results_2023-10-18T10-05-46.000869.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.09406459731543625,\n\
\ \"em_stderr\": 0.0029895193407219744,\n \"f1\": 0.1653156459731545,\n\
\ \"f1_stderr\": 0.003297300596545349,\n \"acc\": 0.27466456195737965,\n\
\ \"acc_stderr\": 0.00699196443452012\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.09406459731543625,\n \"em_stderr\": 0.0029895193407219744,\n\
\ \"f1\": 0.1653156459731545,\n \"f1_stderr\": 0.003297300596545349\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5493291239147593,\n\
\ \"acc_stderr\": 0.01398392886904024\n }\n}\n```"
repo_url: https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_18T10_05_46.000869
path:
- '**/details_harness|drop|3_2023-10-18T10-05-46.000869.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-18T10-05-46.000869.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_18T10_05_46.000869
path:
- '**/details_harness|gsm8k|5_2023-10-18T10-05-46.000869.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-18T10-05-46.000869.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_18T10_05_46.000869
path:
- '**/details_harness|winogrande|5_2023-10-18T10-05-46.000869.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-18T10-05-46.000869.parquet'
- config_name: results
data_files:
- split: 2023_10_18T10_05_46.000869
path:
- results_2023-10-18T10-05-46.000869.parquet
- split: latest
path:
- results_2023-10-18T10-05-46.000869.parquet
---
# Dataset Card for Evaluation run of h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt](https://huggingface.co/h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_h2oai__h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T10:05:46.000869](https://huggingface.co/datasets/open-llm-leaderboard/details_h2oai__h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt/blob/main/results_2023-10-18T10-05-46.000869.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.09406459731543625,
"em_stderr": 0.0029895193407219744,
"f1": 0.1653156459731545,
"f1_stderr": 0.003297300596545349,
"acc": 0.27466456195737965,
"acc_stderr": 0.00699196443452012
},
"harness|drop|3": {
"em": 0.09406459731543625,
"em_stderr": 0.0029895193407219744,
"f1": 0.1653156459731545,
"f1_stderr": 0.003297300596545349
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5493291239147593,
"acc_stderr": 0.01398392886904024
}
}
```
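As with the other evaluation cards, a quick sanity check on the numbers above (an observation about these specific results, not documented leaderboard logic): the aggregate `acc` in the `"all"` block is the unweighted mean of the two per-task accuracies, while the `em`/`f1` values are carried over from the DROP task:

```python
# Relate the "all" block to the per-task results shown above.
# Note: the mean below is an observation about these numbers,
# not an official formula of the Open LLM Leaderboard.
results = {
    "all": {"acc": 0.27466456195737965},
    "harness|gsm8k|5": {"acc": 0.0},
    "harness|winogrande|5": {"acc": 0.5493291239147593},
}

# Unweighted mean of the two task accuracies matches the aggregate.
mean_acc = (results["harness|gsm8k|5"]["acc"]
            + results["harness|winogrande|5"]["acc"]) / 2
assert abs(mean_acc - results["all"]["acc"]) < 1e-12
```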
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,526 | […] |
SUSTech/prm800k | 2023-10-18T10:45:40.000Z | [
"region:us"
] | SUSTech | null | null | 0 | 0 | 2023-10-18T10:45:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: labeler
dtype: string
- name: timestamp
dtype: timestamp[ns]
- name: generation
dtype: int64
- name: is_quality_control_question
dtype: bool
- name: is_initial_screening_question
dtype: bool
- name: question
struct:
- name: ground_truth_answer
dtype: string
- name: ground_truth_solution
dtype: string
- name: pre_generated_answer
dtype: string
- name: pre_generated_steps
sequence: string
- name: pre_generated_verifier_score
dtype: float64
- name: problem
dtype: string
- name: label
struct:
- name: finish_reason
dtype: string
- name: steps
list:
- name: chosen_completion
dtype: int64
- name: completions
list:
- name: flagged
dtype: bool
- name: rating
dtype: int64
- name: text
dtype: string
- name: human_completion
dtype: 'null'
- name: total_time
dtype: int64
splits:
- name: train
num_bytes: 342584845
num_examples: 97782
- name: test
num_bytes: 9103403
num_examples: 2762
download_size: 132017611
dataset_size: 351688248
---
# Dataset Card for "prm800k"
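The nested `label.steps` schema in the front matter above can be traversed as in this sketch (the toy record is invented for illustration; only the field names follow the schema):

```python
# Sketch: walk one prm800k-style record and collect the text of each step's
# chosen completion. Field names follow the schema in the card's front matter;
# the record contents are made up.
record = {
    "question": {"problem": "What is 2 + 2?", "ground_truth_answer": "4"},
    "label": {
        "finish_reason": "solution",
        "steps": [
            {
                "chosen_completion": 1,
                "completions": [
                    {"text": "2 + 2 = 5", "rating": -1, "flagged": False},
                    {"text": "2 + 2 = 4", "rating": 1, "flagged": False},
                ],
                "human_completion": None,
            }
        ],
    },
}

# Index each step's completion list by its chosen_completion pointer.
chosen = [
    step["completions"][step["chosen_completion"]]["text"]
    for step in record["label"]["steps"]
    if step["chosen_completion"] is not None
]
print(chosen)  # ['2 + 2 = 4']
```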
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,513 | […] |
birdhouse5/shrek | 2023-10-18T11:08:24.000Z | [
"region:us"
] | birdhouse5 | null | null | 0 | 0 | 2023-10-18T10:56:22 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,563 | […] |