| datasetId (string, lengths 2–117) | card (string, lengths 19–1.01M) |
|---|---|
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855036 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: jpcorb20/pegasus-large-reddit_tifu-samsum-256
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: test
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256
* Dataset: samsum
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
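The `col_mapping` block in the metadata above tells the evaluator which dataset columns supply the model input and the reference target; conceptually it is a column-rename step. A minimal sketch in plain Python, using a hypothetical samsum-style row (the row contents are invented for illustration):

```python
# col_mapping as declared in the card above: evaluator column -> dataset column.
col_mapping = {"text": "dialogue", "target": "summary"}

# Hypothetical samsum-style example row.
row = {
    "id": "13818513",
    "dialogue": "Amanda: I baked cookies. Do you want some?",
    "summary": "Amanda baked cookies and offers some.",
}

# Map each expected evaluator column to the value of the dataset column it points at.
mapped = {expected: row[source] for expected, source in col_mapping.items()}

print(mapped["text"])    # the dialogue fed to the summarization model
print(mapped["target"])  # the reference summary used for scoring
```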
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
sagorhishab/demo_data | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
license: mit
task_categories:
- text-generation
language:
- bn
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
gregvascaino/fabricio | ---
license: openrail
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/44583635 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1330
dataset_size: 182
---
# Dataset Card for "44583635"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
julep-ai/openai-community-posts | ---
dataset_info:
features:
- name: post_discussion_id
dtype: int64
- name: post_discussion_tags
sequence: string
- name: post_discussion_title
dtype: string
- name: post_discussion_created_at
dtype: timestamp[ns, tz=UTC]
- name: post_category_id
dtype: int64
- name: post_discussion_views
dtype: int64
- name: post_discussion_reply_count
dtype: int64
- name: post_discussion_like_count
dtype: int64
- name: post_discussion_participant_count
dtype: int64
- name: post_discussion_word_count
dtype: float64
- name: post_id
dtype: int64
- name: post_created_at
dtype: string
- name: post_content
dtype: string
- name: post_read_count
dtype: int64
- name: post_reply_count
dtype: int64
- name: post_author_id
dtype: string
- name: post_number
dtype: int64
- name: post_discussion_related_topics
sequence: int64
- name: accepted_answer_post
dtype: float64
- name: post_content_raw
dtype: string
- name: post_category_name
dtype: string
- name: post_sentiment
dtype: string
- name: post_sentiment_score
dtype: float64
- name: post_content_cluster_embedding
sequence: float64
- name: post_content_classification_embedding
sequence: float64
- name: post_content_search_document_embedding
sequence: float64
- name: tag1
dtype: string
- name: tag2
dtype: string
- name: tag3
dtype: string
- name: tag4
dtype: string
- name: post_discussion_url
dtype: string
- name: post_url
dtype: string
- name: topic_model_medium
dtype: string
- name: topic_model_broad
dtype: string
splits:
- name: train
num_bytes: 1959958888
num_examples: 97033
download_size: 1928991796
dataset_size: 1959958888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# OpenAI Community Posts
This dataset is curated from the posts of the OpenAI Community Forum (https://community.openai.com).

## Dataset Details
### Dataset Description
The OpenAI Community Posts dataset comprises discussions, posts, and metadata from the OpenAI Community Forum.
It includes details such as discussion titles, tags, views, reply counts, post content, sentiment scores, vector embeddings for content analysis, and identifiers linking posts to discussions.
The dataset aims to facilitate analysis on community engagement, content sentiment, and discussion dynamics.
_The dataset includes posts from the creation of the forum until February 28th, 2024._
The dataset was primarily gathered to understand user sentiment toward different OpenAI products, as well as to collect feedback, complaints, and common problems users faced.
Posts from the following [categories](https://community.openai.com/categories) and their relevant sub-categories are included:
- [API](https://community.openai.com/c/api/7)
- API/Bugs
- API/Deprecations
- API/Feedback
- [GPT Builders](https://community.openai.com/c/gpts-builders/33)
- GPT Builders/Chat-Plugins
- GPT Builders/Plugin-Store
- [Prompting](https://community.openai.com/c/prompting/8)
- [Community](https://community.openai.com/c/community/21)
- [Documentation](https://community.openai.com/c/documentation/14)
- **Curated by:** Julep AI
- **Language(s) (NLP):** English
### Dataset Sources [optional]
- **Forum:** https://community.openai.com
---
## Dataset Structure
The OpenAI Community Posts dataset is structured around two primary entities: discussions and posts. Each discussion comprises multiple posts, including an initiating post and subsequent replies.
The dataset includes various features capturing the characteristics and metrics of both discussions and posts, as well as sentiment analyses and vector embeddings for advanced content analysis.
### Fields Description
- **Discussion-Level Features**:
- `post_discussion_id`: Unique identifier for the discussion.
- `post_discussion_tags`: Tags or keywords associated with the discussion.
- `post_discussion_title`: Title of the discussion.
- `post_discussion_created_at`: Timestamp indicating when the discussion was created.
- `post_category_id`: Identifier for the category under which the discussion falls.
- `post_discussion_views`: Number of views the discussion has received.
- `post_discussion_reply_count`: Count of replies or posts within the discussion.
- `post_discussion_like_count`: Number of likes the discussion has accumulated.
- `post_discussion_participant_count`: Number of unique participants in the discussion.
- `post_discussion_word_count`: Total word count of all posts within the discussion.
- `post_discussion_related_topics`: Related topics or discussions.
- `post_discussion_url`: Web URL of the discussion.
- **Post-Level Features**:
- `post_id`: Unique identifier for the post.
- `post_author`: Name or identifier of the post's author (dropped during anonymization; see Personal and Sensitive Information below).
- `post_created_at`: Timestamp indicating when the post was created.
- `post_content`: HTML content of the post.
- `post_read_count`: Number of times the post has been read.
- `post_reply_count`: Number of replies to the post.
- `post_author_id`: Unique identifier for the post's author.
- `post_number`: Sequential number of the post within the discussion.
- `accepted_answer_post`: Flag (stored as a float) indicating whether the post is marked as the accepted answer to the discussion.
- `post_content_raw`: Markdown formatted content of the post.
- `post_category_name`: Name of the category to which the post/discussion belongs.
- `post_sentiment`: Sentiment of the post (e.g., positive, negative, neutral).
- `post_sentiment_score`: Numerical score representing the sentiment of the post.
- `post_content_cluster_embedding`: Vector embedding for clustering purposes.
- `post_content_classification_embedding`: Vector embedding for classification.
- `post_content_search_document_embedding`: Vector embedding intended for enhancing search functionalities.
- `post_url`: Web URL of the post.
### Additional Notes
- **Relationships**: Each post is linked to a discussion through `post_discussion_id`, facilitating analyses that require context from the discussion level or aggregations at the discussion level.
- **Vector Embeddings**: The inclusion of vector embeddings (`post_content_cluster_embedding`, `post_content_classification_embedding`, `post_content_search_document_embedding`) enables advanced NLP tasks, including but not limited to clustering, classification, and enhanced search capabilities within the dataset.
- **Sentiment Analysis**: Sentiment scores (`post_sentiment`, `post_sentiment_score`) provide insights into the emotional tone of posts, useful for content analysis, community mood tracking, and identifying discussions that may require moderator attention.
This structure supports a wide range of analyses, from basic statistical summaries to complex machine learning models, by providing comprehensive metadata, content, and derived metrics for each post and discussion in the OpenAI Community Forum.
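Because every post carries its parent's `post_discussion_id`, discussion-level aggregation reduces to a group-by. A minimal sketch in plain Python over hypothetical sample rows that mirror a few of the dataset's fields (the real data loads via `datasets.load_dataset("julep-ai/openai-community-posts")`):

```python
from collections import defaultdict

# Hypothetical rows mirroring a subset of the dataset's schema.
posts = [
    {"post_id": 1, "post_discussion_id": 100, "post_sentiment_score": 0.8},
    {"post_id": 2, "post_discussion_id": 100, "post_sentiment_score": -0.2},
    {"post_id": 3, "post_discussion_id": 200, "post_sentiment_score": 0.5},
]

# Group posts under their parent discussion.
by_discussion = defaultdict(list)
for post in posts:
    by_discussion[post["post_discussion_id"]].append(post)

# Discussion-level aggregate: mean sentiment score per discussion.
mean_sentiment = {
    disc_id: sum(p["post_sentiment_score"] for p in group) / len(group)
    for disc_id, group in by_discussion.items()
}
```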
## Dataset Creation
### Curation Rationale
The OpenAI Community Posts dataset consists of discussions and posts from the OpenAI Community Forum, specifically curated to analyze developer sentiment, identify common problems, and gather feedback on OpenAI products. It includes detailed metadata for discussions and posts, sentiment scores, and vector embeddings for content, facilitating a comprehensive analysis of community engagement and response to OpenAI's offerings. This dataset serves as a valuable resource for understanding the needs, challenges, and perceptions of developers using OpenAI technologies, contributing to product improvement and community support.
#### Personal and Sensitive Information
Efforts were made to anonymize personal information where possible, excluding direct identifiers but including publicly shared content and metadata for analysis.
Specifically, the `post_author` field was dropped and `post_author_id` was converted to a SHA-256 hash to prevent user identification.
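The hashing step can be sketched as follows; the input id is hypothetical, and the card does not state whether a salt was applied:

```python
import hashlib

def anonymize_author_id(author_id: str) -> str:
    """Replace a raw author id with its SHA-256 hex digest."""
    return hashlib.sha256(author_id.encode("utf-8")).hexdigest()

# Hypothetical raw id. The digest is deterministic (same input -> same hash),
# so posts by the same author remain linkable without exposing the original id.
hashed = anonymize_author_id("user_12345")
```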
|
mihirinamdar/finqa | ---
license: mit
---
|
hpprc/jawiki | ---
language:
- ja
license:
- cc-by-sa-3.0
- gfdl
pretty_name: jawiki
dataset_info:
features:
- name: id
dtype: int64
- name: title
dtype: string
- name: text
dtype: string
- name: paragraphs
list:
- name: paragraph_id
dtype: int64
- name: tag
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: wikitext
dtype: string
- name: date_created
dtype: string
- name: date_modified
dtype: string
- name: is_disambiguation_page
dtype: bool
- name: is_sexual_page
dtype: bool
- name: is_violent_page
dtype: bool
- name: templates
sequence: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 21992139146
num_examples: 1399160
download_size: 11689147520
dataset_size: 21992139146
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# JaWiki
This is a text dataset extracted from Wikipedia's [HTML dump files](https://dumps.wikimedia.org/other/enterprise_html/).
Unlike text extracted with Wikiextractor, it preserves document structure such as paragraphs while being free of unnecessary markup.
The dump file used is the one published on January 1, 2024.
The dataset also bundles various additional data to make it easy to use for a range of NLP tasks.
For the preprocessing scripts, please see the [GitHub repository](https://github.com/hppRC/jawiki).
## Data Structure
Each record corresponds to a single Wikipedia article.
The overall data structure and field descriptions are given below.
- id (int)
- title (str)
  - The article title.
- text (str)
  - The text of each paragraph (`text` in `paragraphs`) joined with newlines.
- paragraphs (list[dict[str, int | str]])
  - The set of paragraphs in the article. Each paragraph is represented as a dictionary with the following structure.
  - paragraph_id (int)
    - The index of the paragraph within the article.
  - tag (str)
    - The name of the HTML tag that marked up the paragraph.
  - title (str | None)
    - The title of the section containing the paragraph.
    - May be absent.
  - text (str)
    - The paragraph's body text.
- abstract (str | None)
  - The article's abstract.
  - May be absent.
- wikitext (str)
  - The article body as extracted from wikitext; kept alongside `text` for comparison and to help improve analysis accuracy.
- date_created (str)
  - The date the article was created.
- date_modified (str)
  - The date the article was last edited.
- is_disambiguation_page (bool)
  - Whether the page is a disambiguation page, determined from strings in `templates`.
- is_sexual_page (bool)
  - Whether the page contains sexual content, determined from strings in `templates`.
- is_violent_page (bool)
  - Whether the page contains violent content, determined from strings in `templates`.
- templates (list[str])
  - The list of templates used to build the article.
- url (str)
This dataset's implementation was informed by [singletongue/wikipedia-utils](https://github.com/singletongue/wikipedia-utils).
We would like to take this opportunity to express our gratitude.
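The relationship between `text` and `paragraphs` can be sketched as follows; the sample record is hypothetical and mirrors only the fields involved:

```python
# Hypothetical record mirroring part of the structure described above.
record = {
    "title": "Example article",
    "paragraphs": [
        {"paragraph_id": 0, "tag": "p", "title": None, "text": "First paragraph."},
        {"paragraph_id": 1, "tag": "p", "title": "History", "text": "Second paragraph."},
    ],
}

# `text` is each paragraph's text joined with newlines.
text = "\n".join(p["text"] for p in record["paragraphs"])
```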
|
kaitchup/opus-Italian-to-English | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: validation
num_bytes: 296354
num_examples: 2000
- name: train
num_bytes: 99243787
num_examples: 960042
download_size: 73634748
dataset_size: 99540141
---
# Dataset Card for "opus-it-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
naxalpha/stable-icons-128 | ---
dataset_info:
features:
- name: image
dtype: image
- name: tags
dtype: string
splits:
- name: train
num_bytes: 16579464.375
num_examples: 5525
download_size: 16290486
dataset_size: 16579464.375
---
# Dataset Card for "stable-icons-128"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pranaydeeps/CAMEO | ---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- emotion
- complexity
- readability
- sentiment
pretty_name: CAMEO
size_categories:
- 10K<n<100K
---
# Dataset Card for CAMEO
<!-- Provide a quick summary of the dataset. -->
Dataset to accompany the EMNLP'23 paper titled: "Misery Loves Complexity: Exploring Linguistic Complexity in the Context of Emotion Detection".
## Dataset Details
A 50,000-instance subset of the GoEmotions dataset, automatically annotated with the following linguistic complexity measures:
- idt: Incomplete Dependency Theory
- dlt: Dependency Locality Theory
- nnd: Nested-Nouns Distance
- le: Left-embeddedness
- percentage_polysyllable_words: % of polysyllable words
- avg_conn_doc: Average connectives per sentence
- number_of_uniq_entities: Number of unique named entities
- average_word_len: Average word length
- dale_word_frequency_score: DALE Word Frequency Score
- avgtfidf: Average TF-IDF of all words based on the background corpus
- avgll: Average Log-likelihood of all words based on the background corpus
- type_token_ratio_perc: % Type-token ratio
Please refer to the paper for further details on the metrics or other information.
For details on how the data was collected or annotated for emotions, please refer to the original [GoEmotions dataset](https://github.com/google-research/google-research/tree/master/goemotions).
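Two of the simpler measures listed above, average word length and % type-token ratio, can be sketched as follows; the exact tokenization and preprocessing used for the dataset's annotations may differ:

```python
def average_word_length(text: str) -> float:
    """Mean number of characters per whitespace-delimited word."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def type_token_ratio_perc(text: str) -> float:
    """Unique words as a percentage of total words (case-folded)."""
    words = text.lower().split()
    return 100.0 * len(set(words)) / len(words)

sample = "the cat sat on the mat"
avg_len = average_word_length(sample)   # 17 characters over 6 words
ttr = type_token_ratio_perc(sample)     # 5 unique words out of 6
```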
|
jlbaker361/evaluation | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: image
dtype: image
- name: model
dtype: string
splits:
- name: train
num_bytes: 1848372.0
num_examples: 3
download_size: 1850574
dataset_size: 1848372.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SahilSN/DataSet_v4 | ---
license: unknown
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 20202
num_examples: 91
download_size: 10340
dataset_size: 20202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467113 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_v5
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_v5
dataset_config: mathemakitten--winobias_antistereotype_test_v5
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
cardiffnlp/databench | ---
language:
- en
- es
pretty_name: " 💾🏋️💾 DataBench 💾🏋️💾"
tags:
- table-question-answering
- table
- qa
license: mit
task_categories:
- table-question-answering
- question-answering
---
# 💾🏋️💾 DataBench 💾🏋️💾
This repository contains the original 65 datasets used for the paper [Question Answering over Tabular Data with DataBench:
A Large-Scale Empirical Evaluation of LLMs](https://huggingface.co/datasets/cardiffnlp/databench/resolve/main/Databench-LREC-Coling-2024.pdf) which appeared in LREC-COLING 2024.
Large Language Models (LLMs) are showing emerging abilities, and one of the latest recognized ones is tabular
reasoning in question answering on tabular data. Although there are some available datasets to assess question
answering systems on tabular data, they are not large and diverse enough to evaluate this new ability of LLMs.
To this end, we provide a corpus of 65 real-world datasets, with 3,269,975 rows and 1,615 columns in total, and 1,300 questions to evaluate your models on the task of QA over tabular data.
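The task of QA over tabular data can be illustrated with a toy example: a natural-language question answered programmatically over a small table. The rows and questions below are hypothetical and not drawn from the benchmark:

```python
# Toy Titanic-style table as a list of rows.
rows = [
    {"name": "Alice", "age": 29, "survived": True},
    {"name": "Bob", "age": 41, "survived": False},
    {"name": "Carol", "age": 35, "survived": True},
]

# Question: "How many passengers survived?"
n_survived = sum(1 for r in rows if r["survived"])

# Question: "What is the average age of survivors?"
avg_survivor_age = sum(r["age"] for r in rows if r["survived"]) / n_survived
```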
## 📚 Datasets
By clicking on each name in the table below, you will be able to explore each dataset.
| | Name | Rows | Cols | Domain | Source (Reference) |
|---:|:-------------------------------|-------:|-------:|:---------------------------|:-----------------------------------------------------------------------------------------------------------------------------------|
| 1 | [Forbes](https://public.graphext.com/0b211530c7e213d3/index.html?section=data) | 2668 | 17 | Business | [Forbes](https://www.forbes.com/billionaires/)|
| 2 | [Titanic](https://public.graphext.com/8577225c5ffd88fd/index.html) | 887 | 8 | Travel and Locations | [Kaggle](https://www.kaggle.com/competitions/titanic/data)|
| 3 | [Love](https://public.graphext.com/be7a566b0c485916/index.html) | 373 | 35 | Social Networks and Surveys | [Graphext](https://public.graphext.com/1de78f6820cfd5ba/index.html) |
| 4 | [Taxi](https://public.graphext.com/bcee13c23070f333/index.html) | 100000 | 20 | Travel and Locations | [Kaggle](https://www.kaggle.com/competitions/nyc-taxi-trip-duration/overview) |
| 5 | [NYC Calls](https://public.graphext.com/1ce2f5fae408621e/index.html) | 100000 | 46 | Business | [City of New York](https://data.cityofnewyork.us/Social-Services/NYC-311-Data/jrb2-thup) |
| 6 | [London Airbnbs](https://public.graphext.com/6bbf4bbd3ff279c0/index.html) | 75241 | 74 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/labdmitriy/airbnb) |
| 7 | [Fifa](https://public.graphext.com/37bca51494c10a79/index.html) | 14620 | 59 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/stefanoleone992/fifa-21-complete-player-dataset) |
| 8 | [Tornados](https://public.graphext.com/4be9872e031199c3/index.html) | 67558 | 14 | Health | [Kaggle](https://www.kaggle.com/datasets/danbraswell/us-tornado-dataset-1950-2021) |
| 9 | [Central Park](https://public.graphext.com/7b3d3a4d7bf1e9b5/index.html) | 56245 | 6 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/danbraswell/new-york-city-weather-18692022) |
| 10 | [ECommerce Reviews](https://public.graphext.com/a5b8911b215958ad/index.html) | 23486 | 10 | Business | [Kaggle](https://www.kaggle.com/datasets/nicapotato/womens-ecommerce-clothing-reviews) |
| 11 | [SF Police](https://public.graphext.com/ab815ab14f88115c/index.html) | 713107 | 35 | Social Networks and Surveys | [US Gov](https://catalog.data.gov/dataset/police-department-incident-reports-2018-to-present) |
| 12 | [Heart Failure](https://public.graphext.com/245cec64075f5542/index.html) | 918 | 12 | Health | [Kaggle](https://www.kaggle.com/datasets/fedesoriano/heart-failure-prediction) |
| 13 | [Roller Coasters](https://public.graphext.com/1e550e6c24fc1930/index.html) | 1087 | 56 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/robikscube/rollercoaster-database) |
| 14 | [Madrid Airbnbs](https://public.graphext.com/77265ea3a63e650f/index.html) | 20776 | 75 | Travel and Locations | [Inside Airbnb](http://data.insideairbnb.com/spain/comunidad-de-madrid/madrid/2023-09-07/data/listings.csv.gz) |
| 15 | [Food Names](https://public.graphext.com/5aad4c5d6ef140b3/index.html) | 906 | 4 | Business | [Data World](https://data.world/alexandra/generic-food-database) |
| 16 | [Holiday Package Sales](https://public.graphext.com/fbc34d3f24282e46/index.html) | 4888 | 20 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/susant4learning/holiday-package-purchase-prediction) |
| 17 | [Hacker News](https://public.graphext.com/f20501a9d616b5a5/index.html) | 9429 | 20 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/hacker-news/hacker-news) |
| 18 | [Staff Satisfaction](https://public.graphext.com/6822ac1ce6307fec/index.html) | 14999 | 11 | Business | [Kaggle](https://www.kaggle.com/datasets/mohamedharris/employee-satisfaction-index-dataset) |
| 19 | [Aircraft Accidents](https://public.graphext.com/1802117b1b14f5c5/index.html) | 23519 | 23 | Health | [Kaggle](https://www.kaggle.com/datasets/ramjasmaurya/aviation-accidents-history1919-april-2022) |
| 20 | [Real Estate Madrid](https://public.graphext.com/5f83ec219a7ea84f/index.html) | 26026 | 59 | Business | [Idealista](https://public.graphext.com/5f83ec219a7ea84f/index.html) |
| 21 | [Telco Customer Churn](https://public.graphext.com/362cd8e3e96f70d4/index.html) | 7043 | 21 | Business | [Kaggle](https://www.kaggle.com/datasets/blastchar/telco-customer-churn) |
| 22 | [Airbnbs Listings NY](https://public.graphext.com/77265ea3a63e650f/index.html) | 37012 | 33 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data) |
| 23 | [Climate in Madrid](https://public.graphext.com/83a75b4f1cea8df4/index.html?section=data) | 36858 | 26 | Travel and Locations | [AEMET](https://public.graphext.com/83a75b4f1cea8df4/index.html?section=data) |
| 24 | [Salary Survey Spain 2018](https://public.graphext.com/24d1e717ba01aa3d/index.html) | 216726 | 29 | Business | [INE](https://www.ine.es) |
| 25 | [Data Driven SEO ](https://public.graphext.com/4e5b1cac9ebdfa44/index.html) | 62 | 5 | Business | [Graphext](https://www.graphext.com/post/data-driven-seo-a-keyword-optimization-guide-using-web-scraping-co-occurrence-analysis-graphext-deepnote-adwords) |
| 26 | [Predicting Wine Quality](https://public.graphext.com/de04acf5d18a9aea/index.html) | 1599 | 12 | Business | [Kaggle](https://www.kaggle.com/datasets/yasserh/wine-quality-dataset) |
| 27 | [Supermarket Sales](https://public.graphext.com/9a6742da6a8d8f7f/index.html) | 1000 | 17 | Business | [Kaggle](https://www.kaggle.com/datasets/aungpyaeap/supermarket-sales) |
| 28 | [Predict Diabetes](https://public.graphext.com/def4bada27af324c/index.html) | 768 | 9 | Health | [Kaggle](https://www.kaggle.com/datasets/iammustafatz/diabetes-prediction-dataset) |
| 29 | [NYTimes World In 2021](https://public.graphext.com/af4c8eef1757973c/index.html?section=data) | 52588 | 5 | Travel and Locations | [New York Times](https://public.graphext.com/af4c8eef1757973c/index.html) |
| 30 | [Professionals Kaggle Survey](https://public.graphext.com/3a2e87f90363a85d/index.html) | 19169 | 64 | Business | [Kaggle](https://www.kaggle.com/c/kaggle-survey-2021/data) |
| 31 | [Trustpilot Reviews](https://public.graphext.com/367e29432331fbfd/index.html?section=data) | 8020 | 6 | Business | [TrustPilot](https://public.graphext.com/367e29432331fbfd/index.html?section=data) |
| 32 | [Delicatessen Customers](https://public.graphext.com/a1687589fbde07bc/index.html) | 2240 | 29 | Business | [Kaggle](https://www.kaggle.com/datasets/rodsaldanha/arketing-campaign) |
| 33 | [Employee Attrition](https://public.graphext.com/07a91a15ecf2b8f6/index.html) | 14999 | 11 | Business | [Kaggle(modified)](https://www.kaggle.com/datasets/pavan9065/predicting-employee-attrition) |
| 34 | [World Happiness Report 2020](https://public.graphext.com/754c83ff0a7ba087/index.html) | 153 | 20 | Social Networks and Surveys | [World Happiness](https://worldhappiness.report/data/) |
| 35 | [Billboard Lyrics](https://public.graphext.com/7e0b009e8d0af719/index.html) | 5100 | 6 | Sports and Entertainment | [Brown University](https://cs.brown.edu/courses/cs100/students/project11/) |
| 36 | [US Migrations 2012-2016](https://public.graphext.com/dbdadf87a5c21695/index.html) | 288300 | 9 | Social Networks and Surveys | [US Census](https://www.census.gov/topics/population/migration/guidance/county-to-county-migration-flows.html) |
| 37 | [Ted Talks](https://public.graphext.com/07e48466fb670904/index.html) | 4005 | 19 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/ashishjangra27/ted-talks) |
| 38 | [Stroke Likelihood](https://public.graphext.com/20ccfee9e84948e3/index.html) | 5110 | 12 | Health | [Kaggle](https://www.kaggle.com/datasets/kamilpytlak/personal-key-indicators-of-heart-disease) |
| 39 | [Happy Moments](https://public.graphext.com/9b86efff48989701/index.html) | 100535 | 11 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/ritresearch/happydb) |
| 40 | [Speed Dating](https://public.graphext.com/f1912daad7870be0/index.html) | 8378 | 123 | Social Networks and Surveys | [Kaggle](https://www.kaggle.com/datasets/ulrikthygepedersen/speed-dating) |
| 41 | [Airline Mentions X (former Twitter)](https://public.graphext.com/29cb7f73f6e17a38/index.html) | 14640 | 15 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/7e6999327d1f83fd/index.html) |
| 42 | [Predict Student Performance](https://public.graphext.com/def4bada27af324c/index.html) | 395 | 33 | Business | [Kaggle](https://www.kaggle.com/datasets/impapan/student-performance-data-set) |
| 43 | [Loan Defaults](https://public.graphext.com/0c7fb68ab8071a1f/index.html) | 83656 | 20 | Business | [SBA](https://www.kaggle.com/datasets/mirbektoktogaraev/should-this-loan-be-approved-or-denied) |
| 44 | [IMDb Movies](https://public.graphext.com/e23e33774872c496/index.html) | 85855 | 22 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/harshitshankhdhar/imdb-dataset-of-top-1000-movies-and-tv-shows) |
| 45 | [Spotify Song Popularity](https://public.graphext.com/def4bada27af324c/index.html) | 21000 | 19 | Sports and Entertainment | [Spotify](https://www.kaggle.com/datasets/tomigelo/spotify-audio-features) |
| 46 | [120 Years Olympics](https://public.graphext.com/e57d5e2f172c9a99/index.html) | 271116 | 15 | Sports and Entertainment | [Kaggle](https://www.kaggle.com/datasets/heesoo37/120-years-of-olympic-history-athletes-and-results) |
| 47 | [Bank Customer Churn](https://public.graphext.com/e8f7aeacd209f74a/index.html) | 7088 | 15 | Business | [Kaggle](https://www.kaggle.com/datasets/mathchi/churn-for-bank-customers) |
| 48 | [Data Science Salary Data](https://public.graphext.com/4e5b1cac9ebdfa44/index.html) | 742 | 28 | Business | [Kaggle](https://www.kaggle.com/datasets/ruchi798/data-science-job-salaries) |
| 49 | [Boris Johnson UK PM Tweets](https://public.graphext.com/f6623a1ca0f41c8e/index.html) | 3220 | 34 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/f6623a1ca0f41c8e/index.html) |
| 50 | [ING 2019 X Mentions](https://public.graphext.com/075030310aa702c6/index.html) | 7244 | 22 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/075030310aa702c6/index.html) |
| 51 | [Pokemon Features](https://public.graphext.com/f30d4d863a2e6b01/index.html) | 1072 | 13 | Business | [Kaggle](https://www.kaggle.com/datasets/rounakbanik/pokemon) |
| 52 | [Professional Map](https://public.graphext.com/70af2240cb751968/index.html) | 1227 | 12 | Business | [Kern et al, PNAS'20](https://github.com/behavioral-ds/VocationMap) |
| 53 | [Google Patents](https://public.graphext.com/a262300e31874716/index.html) | 9999 | 20 | Business | [BigQuery](https://www.kaggle.com/datasets/bigquery/patents/data) |
| 54 | [Joe Biden Tweets](https://public.graphext.com/33fa2efa41541ab1/index.html) | 491 | 34 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/339cee259f0a9b32/index.html?section=data) |
| 55 | [German Loans](https://public.graphext.com/d3f5e425e9d4b0a1/index.html) | 1000 | 18 | Business | [Kaggle](https://www.kaggle.com/datasets/uciml/german-credit/data) |
| 56 | [Emoji Diet](https://public.graphext.com/e721cc7d790c06d4/index.html) | 58 | 35 | Health | [Kaggle](https://www.kaggle.com/datasets/ofrancisco/emoji-diet-nutritional-data-sr28) |
| 57 | [Spain Survey 2015](https://public.graphext.com/90ca7539b160fdfa/index.html?section=data) | 20000 | 45 | Social Networks and Surveys | [CIS](https://public.graphext.com/90ca7539b160fdfa/index.html?section=data) |
| 58 | [US Polls 2020](https://public.graphext.com/dbdadf87a5c21695/index.html) | 3523 | 52 | Social Networks and Surveys | [Brandwatch](https://www.brandwatch.com/p/us-election-raw-polling-data/) |
| 59 | [Second Hand Cars](https://public.graphext.com/543d0c49d7120ca0/index.html) | 50000 | 21 | Business | [DataMarket](https://www.kaggle.com/datasets/datamarket/venta-de-coches) |
| 60 | [Bakery Purchases](https://public.graphext.com/6f2102e80f47a192/index.html) | 20507 | 5 | Business | [Kaggle](https://www.kaggle.com/code/xvivancos/market-basket-analysis/report) |
| 61 | [Disneyland Customer Reviews](https://public.graphext.com/b1037bb566b7b316/index.html) | 42656 | 6 | Travel and Locations | [Kaggle](https://www.kaggle.com/datasets/arushchillar/disneyland-reviews) |
| 62 | [Trump Tweets](https://public.graphext.com/7aff94c3b7f159fc/index.html) | 15039 | 20 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/be903c098a90e46f/index.html?section=data) |
| 63 | [Influencers](https://public.graphext.com/e097f1ea03d761a9/index.html) | 1039 | 14 | Social Networks and Surveys | [X (former Twitter)](https://public.graphext.com/e097f1ea03d761a9/index.html) |
| 64 | [Clustering Zoo Animals](https://public.graphext.com/d1b66902e46a712a/index.html) | 101 | 18 | Health | [Kaggle](https://www.kaggle.com/datasets/jirkadaberger/zoo-animals) |
| 65 | [RFM Analysis](https://public.graphext.com/4db2e54e29006a21/index.html) | 541909 | 8 | Business | [UCI ML](https://www.kaggle.com/datasets/carrie1/ecommerce-data) |
## 🏗️ Folder structure
Each folder represents one dataset. You will find the following files within:
* all.parquet: the processed data, with each column tagged with our typing system, in [parquet](https://arrow.apache.org/docs/python/parquet.html).
* qa.csv: the human-made set of questions for the dataset, tagged by type and by the columns used (the sample_answer column gives the answers for DataBench lite)
* sample.csv: sample containing 20 rows of the original dataset (DataBench lite)
* info.yml: additional information about the dataset
## 🗂️ Column typing system
In an effort to set the stage for later analysis, we have categorized the columns by type. This information allows us to segment different kinds of data so that we can subsequently analyze the model's behavior on each column type separately. All parquet files have been cast to their smallest viable data type using the open-source [Lector](https://github.com/graphext/lector) reader.
This means the data types carry more granular information: whether a column contains NaNs (following pandas' convention of Int vs int), whether small numerical values contain negatives (UInt vs Int) and what their range is, dates with potential timezone information (although for now they are all UTC), and the cardinality of categories coming from the Arrow types.
The table below shows the data types assigned to the columns, along with the number of columns of each type. The most common data types are numbers and categories, accounting for 1336 of the 1615 columns included in DataBench. These are followed by rarer types such as URLs, booleans, dates, and lists of elements.
| Type | Columns | Example |
| -------------- | ------- | ----------------------- |
| number | 788 | 55 |
| category | 548 | apple |
| date | 50 | 1970-01-01 |
| text | 46 | A red fox ran... |
| url | 31 | google.com |
| boolean | 18 | True |
| list[number] | 14 | [1,2,3] |
| list[category] | 112 | [apple, orange, banana] |
| list[url] | 8 | [google.com, apple.com] |
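The Int-vs-int and UInt-vs-Int conventions described above can be illustrated with a small pandas sketch (toy series, not DataBench data): nullable extension dtypes mark columns that contain NaNs, and unsigned types mark non-negative ranges.

```python
import pandas as pd

# Nullable extension dtype ("Int8") vs plain numpy dtype ("uint8"):
# the capitalized dtype admits missing values, the unsigned one marks
# a non-negative range. Categories keep their cardinality in the dtype.
s_clean = pd.Series([1, 2, 3]).astype("uint8")    # no NaNs, no negatives
s_nulls = pd.Series([1, None, 3]).astype("Int8")  # NaNs allowed
s_cat = pd.Series(["apple", "orange", "apple"]).astype("category")

print(s_clean.dtype, s_nulls.dtype, s_cat.dtype)  # uint8 Int8 category
print(s_cat.cat.categories.tolist())              # ['apple', 'orange']
```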
## 🔗 Reference
You can download the paper [here](https://huggingface.co/datasets/cardiffnlp/databench/resolve/main/Databench-LREC-Coling-2024.pdf).
If you use this resource, please use the following reference:
```
@inproceedings{oses-etal-2024-databench,
title = "Question Answering over Tabular Data with DataBench: A Large-Scale Empirical Evaluation of LLMs",
    author = "Jorge Osés Grijalba and Luis Alfonso Ureña-López and
      Eugenio Martínez Cámara and Jose Camacho-Collados",
booktitle = "Proceedings of LREC-COLING 2024",
year = "2024",
address = "Turin, Italy"
}
```
|
George-Zhuang/BFT | ---
license: cc-by-nc-4.0
---
|
molamin/Kinyarwanda_Engligh_Multilingual_ASR | ---
language:
- rw
- en
license:
- cc-by-4.0
size_categories:
- 700K<n<800K
- ~3120 hours
---
This dataset was created from Mozilla's Common Voice dataset for the purposes of Multilingual ASR on Kinyarwanda and English.
The dataset contains 3000 hours of multilingual training samples, 300 hours of validation samples and 200 hours of testing samples.
|
Skarut1945/Markus | ---
license: openrail
---
|
JotDe/data-nonmembers | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 2366233677.114
num_examples: 18862
download_size: 2351059467
dataset_size: 2366233677.114
---
# Dataset Card for "data-nonmembers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tharun-6743/criket-546 | ---
license: openrail
---
|
anirudhlakhotia/KannadaPreTraining | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 20021818455
num_examples: 33663977
download_size: 8419151855
dataset_size: 20021818455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ezell/testing | ---
task_categories:
- text-classification
--- |
maomlab/ToxoCEN | ---
license: mit
task_categories:
- tabular-regression
tags:
- biology
pretty_name: Toxoplasma gondii Coexpression Network
size_categories:
- 10M<n<100M
---
# ToxoCEN: A Co-expression network for *Toxoplasma gondii*
Elucidating gene function is a major goal in biology, especially among non-model organisms.
However, doing so is complicated by the fact that molecular conservation does not always
mirror functional conservation, and that complex relationships among genes are responsible
for encoding pathways and higher-order biological processes. Co-expression, a promising
approach for predicting gene function, relies on the general principle that genes with
similar expression patterns across multiple conditions will likely be involved in the
same biological process. For Toxoplasma gondii, a prevalent human eukaryotic pathogen
greatly diverged from malaria, approximately 47% of the predicted genes in the genome
lack functional annotations. Here, we leveraged a large amount of publicly available
transcriptomic data to generate a T. gondii Co-Expression Network (ToxoCEN),
recapitulating known protein networks, predicting gene function, and
enabling insights into the principles influencing co-expression. Overall, co-expression
is a powerful tool for uncovering gene function, and decreases the experimental tests
needed to identify functions for currently under-annotated genes.
CS Arnold, Y Wang, VB Carruthers, MJ O'Meara
ToxoCEN: A Co-Expression Network for Toxoplasma gondii
Code available at https://github.com/maomlab/CalCEN/tree/master/vignettes/ToxoCEN
**TGME49_transcript_annotations.tsv**
* [Toxoplasma gondii ME49](https://toxodb.org/toxo/app/record/dataset/NCBITAXON_508771) (NCBI Taxon:508771) annotated protein features collected from [ToxoDB](https://toxodb.org/toxo/app) Release 64
**top_coexp_hits.tsv**
* top 50 ToxoCEN associations for each gene
**top_coexp_hits_0.15.tsv**
* top ToxoCEN associations for each gene filtered by score > 0.85 and at most 50 per gene
**Data/estimated_expression_meta.tsv**
* Metadata for RNAseq estimated expression runs
**Data/estimated_expression.tsv**
* gene by RNA-seq run estimated expression
**Networks/ToxoCEN_network.tsv**
* ToxoCEN Co-expression network
**Networks/BlastP_network.tsv**
* Protein sequence similarity network
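As an illustrative sketch of how the filtered hits file relates to the full set of associations (toy table with hypothetical gene ids; the real files use TGME49 identifiers), the score > 0.85 and top-50-per-gene filtering can be reproduced with pandas:

```python
import pandas as pd

# Toy coexpression hits table (hypothetical gene ids).
# top_coexp_hits_0.15.tsv keeps associations with score > 0.85
# and at most 50 partners per gene.
hits = pd.DataFrame({
    "gene":    ["g1", "g1", "g1", "g2"],
    "partner": ["g2", "g3", "g4", "g1"],
    "score":   [0.97, 0.86, 0.40, 0.97],
})

filtered = (
    hits[hits["score"] > 0.85]
    .sort_values("score", ascending=False, kind="mergesort")  # stable sort
    .groupby("gene", group_keys=False)
    .head(50)                                                 # top 50 per gene
)
print(sorted(filtered["partner"]))  # ['g1', 'g2', 'g3']
```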
|
projectbaraat/hin-eng-Mathematical-0.1 | ---
dataset_info:
features:
- name: input
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 410546922
num_examples: 337044
download_size: 149177940
dataset_size: 410546922
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kotoba-speech/ThuVienThanhPhoBacGiang_tscribed_testing_whisper-large-v3 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: duration
dtype: float64
- name: ratio
dtype: float64
- name: videoid
dtype: string
- name: key
dtype: string
- name: dataset_id
dtype: string
- name: lang
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
splits:
- name: train
num_bytes: 1363037430.723
num_examples: 1707
download_size: 1052957365
dataset_size: 1363037430.723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ram07/Emp-dialog-w-new-instruct-1 | ---
license: mit
---
### Instruction:
1) You're an empathy therapist: help with addiction issues and encourage healthy coping.
2) Refer to professionals as needed.
3) Your primary function is to reply by identifying, understanding, and challenging the user's cognitive distortions and unhealthy addiction to drugs and alcohol.
4) Keep the response short, simple (like the assistant responses mentioned below), and much more human-like.
5) Try to use a strategy like {strategy}
### Input from user:
{Conversation} |
Soxcr/Soxcr | ---
license: creativeml-openrail-m
---
|
CyberHarem/nagatsuki_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of nagatsuki/長月/长月 (Azur Lane)
This is the dataset of nagatsuki/長月/长月 (Azur Lane), containing 21 images and their tags.
The core tags of this character are `animal_ears, long_hair, brown_hair, dog_ears, purple_eyes, hair_ornament, tail, dog_tail, crescent_hair_ornament, fang, hat, ribbon, school_hat, hairclip, side_ponytail, bangs, dog_girl, very_long_hair, yellow_headwear, bow, hair_between_eyes, hair_bow, candy_hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 21 | 24.20 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatsuki_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 21 | 14.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatsuki_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 50 | 30.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatsuki_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 21 | 22.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatsuki_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 50 | 42.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nagatsuki_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nagatsuki_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | blush, open_mouth, 1girl, crescent, smile, solo, looking_at_viewer, blue_shirt, kindergarten_uniform, long_sleeves, pantyhose, blue_skirt, school_uniform |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | blush | open_mouth | 1girl | crescent | smile | solo | looking_at_viewer | blue_shirt | kindergarten_uniform | long_sleeves | pantyhose | blue_skirt | school_uniform |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------|:--------|:-----------|:--------|:-------|:--------------------|:-------------|:-----------------------|:---------------|:------------|:-------------|:-----------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-Chat-7B-v0.1 | ---
pretty_name: Evaluation run of togethercomputer/RedPajama-INCITE-Chat-7B-v0.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [togethercomputer/RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-Chat-7B-v0.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-19T01:29:17.433845](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-Chat-7B-v0.1/blob/main/results_2023-10-19T01-29-17.433845.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.00985738255033557,\n\
\ \"em_stderr\": 0.001011740962658439,\n \"f1\": 0.06564072986577182,\n\
\ \"f1_stderr\": 0.0016570971110147965,\n \"acc\": 0.3014062577602678,\n\
\ \"acc_stderr\": 0.007815997155326552\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.00985738255033557,\n \"em_stderr\": 0.001011740962658439,\n\
\ \"f1\": 0.06564072986577182,\n \"f1_stderr\": 0.0016570971110147965\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.004548900682335102,\n \
\ \"acc_stderr\": 0.0018535550440036204\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5982636148382005,\n \"acc_stderr\": 0.013778439266649482\n\
\ }\n}\n```"
repo_url: https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_19T01_29_17.433845
path:
- '**/details_harness|drop|3_2023-10-19T01-29-17.433845.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-19T01-29-17.433845.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_19T01_29_17.433845
path:
- '**/details_harness|gsm8k|5_2023-10-19T01-29-17.433845.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-19T01-29-17.433845.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:36:55.305122.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:36:55.305122.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-07-19T16:36:55.305122.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_19T01_29_17.433845
path:
- '**/details_harness|winogrande|5_2023-10-19T01-29-17.433845.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-19T01-29-17.433845.parquet'
- config_name: results
data_files:
- split: 2023_07_19T16_36_55.305122
path:
- results_2023-07-19T16:36:55.305122.parquet
- split: 2023_10_19T01_29_17.433845
path:
- results_2023-10-19T01-29-17.433845.parquet
- split: latest
path:
- results_2023-10-19T01-29-17.433845.parquet
---
# Dataset Card for Evaluation run of togethercomputer/RedPajama-INCITE-Chat-7B-v0.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [togethercomputer/RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-Chat-7B-v0.1",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-10-19T01:29:17.433845](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__RedPajama-INCITE-Chat-7B-v0.1/blob/main/results_2023-10-19T01-29-17.433845.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.00985738255033557,
"em_stderr": 0.001011740962658439,
"f1": 0.06564072986577182,
"f1_stderr": 0.0016570971110147965,
"acc": 0.3014062577602678,
"acc_stderr": 0.007815997155326552
},
"harness|drop|3": {
"em": 0.00985738255033557,
"em_stderr": 0.001011740962658439,
"f1": 0.06564072986577182,
"f1_stderr": 0.0016570971110147965
},
"harness|gsm8k|5": {
"acc": 0.004548900682335102,
"acc_stderr": 0.0018535550440036204
},
"harness|winogrande|5": {
"acc": 0.5982636148382005,
"acc_stderr": 0.013778439266649482
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Nicolas-BZRD/English_French_Webpages_Scraped_Translated | ---
language:
- en
- fr
license: odbl
size_categories:
- 10M<n<100M
task_categories:
- translation
tags:
- webpages
- parallel
- parallel data
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 6811772380
num_examples: 17161263
download_size: 640497280
dataset_size: 6811772380
---
# English French Webpages Scraped Translated
### Dataset Summary
French/English parallel texts for training translation models: over 17.1 million sentence pairs in French and English. The dataset was created by Chris Callison-Burch, who crawled millions of web pages and then used a set of simple heuristics to transform French URLs into English URLs, assuming that these documents are translations of each other. This is the main dataset of the Workshop on Statistical Machine Translation (WMT) 2015 and can be used for Machine Translation and Language Models. Refer to the paper here: http://www.statmt.org/wmt15/pdf/WMT01.pdf
### Post-process
This dataset has been post-processed to remove all duplicates, empty fields and phrases containing fewer than 5 words.
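The post-processing described above can be sketched roughly as follows. This is a minimal illustration, not the exact script that was used; in particular, whitespace tokenization for the word count and exact-match deduplication are assumptions:

```python
def postprocess(pairs):
    """Drop duplicates, empty fields, and short phrases from (en, fr) pairs.

    A rough sketch of the post-processing described above; the actual
    script may differ (e.g. in how words are counted).
    """
    seen = set()
    kept = []
    for en, fr in pairs:
        en, fr = en.strip(), fr.strip()
        if not en or not fr:  # drop empty fields
            continue
        # drop phrases with fewer than 5 words (whitespace tokenization assumed)
        if len(en.split()) < 5 or len(fr.split()) < 5:
            continue
        if (en, fr) in seen:  # drop exact duplicates
            continue
        seen.add((en, fr))
        kept.append({"en": en, "fr": fr})
    return kept
```

Applied row by row, this yields records matching the `en`/`fr` string features listed in the schema above.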
### Original Dataset Citation
```
@InProceedings{bojar-EtAl:2015:WMT,
author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Haddow, Barry and Huck, Matthias and Hokamp, Chris and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Post, Matt and Scarton, Carolina and Specia, Lucia and Turchi, Marco},
title = {Findings of the 2015 Workshop on Statistical Machine Translation},
booktitle = {Proceedings of the Tenth Workshop on Statistical Machine Translation},
month = {September},
year = {2015},
address = {Lisbon, Portugal},
publisher = {Association for Computational Linguistics},
pages = {1--46},
url = {http://aclweb.org/anthology/W15-3001}
}
``` |
liuyanchen1015/MULTI_VALUE_mnli_myself_coordinate_subjects | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev_matched
num_bytes: 1751
num_examples: 7
- name: dev_mismatched
num_bytes: 6335
num_examples: 26
- name: test_matched
num_bytes: 5593
num_examples: 17
- name: test_mismatched
num_bytes: 4160
num_examples: 17
- name: train
num_bytes: 138564
num_examples: 549
download_size: 75383
dataset_size: 156403
---
# Dataset Card for "MULTI_VALUE_mnli_myself_coordinate_subjects"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wisenut-nlp-team/query_generation_v2 | ---
dataset_info:
features:
- name: title
dtype: string
- name: question
dtype: string
- name: context
sequence: string
splits:
- name: train
num_bytes: 123288519.54612225
num_examples: 125492
- name: validation
num_bytes: 30787437.24413799
num_examples: 31402
download_size: 300033308
dataset_size: 154075956.79026023
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
almandsky/openassistant-tiny | ---
license: mit
---
|
bigbio/scai_disease |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: SCAI Disease
homepage: https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for SCAI Disease
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
SCAI Disease is a dataset annotated in 2010 with mentions of diseases and
adverse effects. It is a corpus containing 400 randomly selected MEDLINE
abstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This
evaluation corpus was annotated by two individuals who hold a Master’s degree
in life sciences.
## Citation Information
```
@inproceedings{gurulingappa:lrec-ws10,
author = {Harsha Gurulingappa and Roman Klinger and Martin Hofmann-Apitius and Juliane Fluck},
title = {An Empirical Evaluation of Resources for the Identification of Diseases and Adverse Effects in Biomedical Literature},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2010},
}
```
|
DanielSongShen/CLIP-food101-image-dataset-tiny_latents_hidden_states | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': apple_pie
'1': baby_back_ribs
'2': baklava
'3': beef_carpaccio
'4': beef_tartare
'5': beet_salad
'6': beignets
'7': bibimbap
'8': bread_pudding
'9': breakfast_burrito
'10': bruschetta
'11': caesar_salad
'12': cannoli
'13': caprese_salad
'14': carrot_cake
'15': ceviche
'16': cheesecake
'17': cheese_plate
'18': chicken_curry
'19': chicken_quesadilla
'20': chicken_wings
'21': chocolate_cake
'22': chocolate_mousse
'23': churros
'24': clam_chowder
'25': club_sandwich
'26': crab_cakes
'27': creme_brulee
'28': croque_madame
'29': cup_cakes
'30': deviled_eggs
'31': donuts
'32': dumplings
'33': edamame
'34': eggs_benedict
'35': escargots
'36': falafel
'37': filet_mignon
'38': fish_and_chips
'39': foie_gras
'40': french_fries
'41': french_onion_soup
'42': french_toast
'43': fried_calamari
'44': fried_rice
'45': frozen_yogurt
'46': garlic_bread
'47': gnocchi
'48': greek_salad
'49': grilled_cheese_sandwich
'50': grilled_salmon
'51': guacamole
'52': gyoza
'53': hamburger
'54': hot_and_sour_soup
'55': hot_dog
'56': huevos_rancheros
'57': hummus
'58': ice_cream
'59': lasagna
'60': lobster_bisque
'61': lobster_roll_sandwich
'62': macaroni_and_cheese
'63': macarons
'64': miso_soup
'65': mussels
'66': nachos
'67': omelette
'68': onion_rings
'69': oysters
'70': pad_thai
'71': paella
'72': pancakes
'73': panna_cotta
'74': peking_duck
'75': pho
'76': pizza
'77': pork_chop
'78': poutine
'79': prime_rib
'80': pulled_pork_sandwich
'81': ramen
'82': ravioli
'83': red_velvet_cake
'84': risotto
'85': samosa
'86': sashimi
'87': scallops
'88': seaweed_salad
'89': shrimp_and_grits
'90': spaghetti_bolognese
'91': spaghetti_carbonara
'92': spring_rolls
'93': steak
'94': strawberry_shortcake
'95': sushi
'96': tacos
'97': takoyaki
'98': tiramisu
'99': tuna_tartare
'100': waffles
- name: CLIP_image_latent
sequence:
sequence: float32
- name: CLIP_hidden_states
sequence:
sequence: float32
splits:
- name: train
num_bytes: 108882075.0
num_examples: 80
- name: test
num_bytes: 27239667.0
num_examples: 20
download_size: 137566981
dataset_size: 136121742.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
DhimanBose/small_bangla_newspaper_dataset | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 1108067
num_examples: 2000
download_size: 451995
dataset_size: 1108067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cmu-mlsp/encodec_24khz-opt-125m-pretrained-ft-librispeech_asr_dummy-validation-features | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: validation
num_bytes: 23693835.0
num_examples: 73
download_size: 22836090
dataset_size: 23693835.0
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "encodec_24khz-opt-125m-pretrained-ft-librispeech_asr_dummy-validation-features"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MohammedNasri/cv_11_arabic_test_denoisy | ---
dataset_info:
features:
- name: audio
sequence: float64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 5817636498
num_examples: 10440
download_size: 2823357222
dataset_size: 5817636498
---
# Dataset Card for "cv_11_arabic_test_denoisy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/LocalizedNarratives | ---
license: cc-by-4.0
---
# Dataset Card for Localized Narratives
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://google.github.io/localized-narratives/](https://google.github.io/localized-narratives/)
- **Repository:** [https://github.com/google/localized-narratives](https://github.com/google/localized-narratives)
- **Paper:** [Connecting Vision and Language with Localized Narratives](https://arxiv.org/pdf/1912.03098.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Localized Narratives is a new form of multimodal image annotation connecting vision and language.
We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing.
Since the voice and the mouse pointer are synchronized, we can localize every single word in the description.
This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data.
We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available.
As of now, only the `OpenImages` and `OpenImages_captions` subsets are available, but feel free to contribute the other subsets of Localized Narratives!
`OpenImages_captions` is similar to the `OpenImages` subset. The difference is that captions are grouped per image (images can have multiple captions). For this subset, `timed_caption`, `traces` and `voice_recording` are not available.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each instance has the following structure:
```
{
dataset_id: 'mscoco_val2017',
image_id: '137576',
annotator_id: 93,
caption: 'In this image there are group of cows standing and eating th...',
timed_caption: [{'utterance': 'In this', 'start_time': 0.0, 'end_time': 0.4}, ...],
traces: [[{'x': 0.2086, 'y': -0.0533, 't': 0.022}, ...], ...],
voice_recording: 'coco_val/coco_val_137576_93.ogg'
}
```
### Data Fields
Each line represents one Localized Narrative annotation on one image by one annotator and has the following fields:
- `dataset_id`: String identifying the dataset and split where the image belongs, e.g. mscoco_val2017.
- `image_id`: String identifier of the image, as specified by each dataset.
- `annotator_id`: Integer uniquely identifying each annotator.
- `caption`: Image caption as a string of characters.
- `timed_caption`: List of timed utterances, i.e. {utterance, start_time, end_time}, where utterance is a word (or group of words) and (start_time, end_time) is the interval during which it was spoken, relative to the start of the recording.
- `traces`: List of trace segments, one between each time the mouse pointer enters the image and leaves it. Each trace segment is represented as a list of timed points, i.e. {x, y, t}, where x and y are the normalized image coordinates (with origin at the top-left corner of the image) and t is the time in seconds since the start of the recording. Please note that the coordinates can go slightly beyond the image, i.e. <0 or >1, as the mouse traces were recorded including a small band around the image.
- `voice_recording`: Relative URL path, with respect to https://storage.googleapis.com/localized-narratives/voice-recordings, of the voice recording (in OGG format) for that particular image.
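For example, the absolute download URL for a recording can be built by joining the base path above with the `voice_recording` field (a minimal sketch):

```python
VOICE_BASE = "https://storage.googleapis.com/localized-narratives/voice-recordings"

def voice_recording_url(relative_path: str) -> str:
    """Build the absolute URL for a voice recording from its relative path."""
    return f"{VOICE_BASE}/{relative_path}"

url = voice_recording_url("coco_val/coco_val_137576_93.ogg")
print(url)
```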
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
|
BaorBaor/14k_data_multichoice | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: token_type_ids
sequence:
sequence: int8
- name: attention_mask
sequence:
sequence: int8
- name: label
dtype: int64
splits:
- name: train
num_bytes: 412680494
num_examples: 14467
download_size: 66160105
dataset_size: 412680494
---
# Dataset Card for "14k_data_multichoice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlp-brin-id/unsup-fact-all | ---
license: mit
task_categories:
- text-classification
language:
- id
size_categories:
- 10K<n<100K
---
This dataset provides contradiction cases between facts and contents, inferred from the HOAX-class subset of nlp-brin-id/id-hoax-report-merge-v2. </br>
The subsets can be used as samples for interleaved batch sampling during the training stage of contrastive learning models. </br>
Attributes used: 'Content', 'Fact'.</br>
See 'Files and Versions' for inspecting the subset independently: </br>
- nonhoax_fct_* is 'Fact' subset from online reporting data (class=NON-HOAX)
- pair_hoax_fct_* is 'Fact' subset from online reporting data (class=HOAX)
- pair_hoax_cnt_* is 'Content' subset from online reporting data (class=HOAX)
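One simple way to interleave subsets during sampling is a round-robin merge; a minimal sketch with placeholder subset items (the actual sampling strategy used in training is not specified here):

```python
from itertools import chain, zip_longest

def interleave(*subsets):
    """Round-robin over several subsets, skipping exhausted ones."""
    sentinel = object()
    merged = chain.from_iterable(zip_longest(*subsets, fillvalue=sentinel))
    return [item for item in merged if item is not sentinel]

facts = ["fact_1", "fact_2"]
contents = ["content_1", "content_2", "content_3"]
print(interleave(facts, contents))
```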
|
bnithish/question_difficulty_1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 29245
num_examples: 68
download_size: 10541
dataset_size: 29245
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jonathandechert/DEPlainAPA | ---
language:
- de
--- |
Boxit372/wheatley-voicelines | ---
pretty_name: Wheatley Voicelines
--- |
RafaelBds1/Valentina | ---
license: openrail
---
|
HaiLong9901/VNeseTextSum | ---
task_categories:
- text2text-generation
language:
- vi
pretty_name: VNeseTextSum
--- |
brianarbuckle/cocktail_recipes | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
- text-retrieval
- summarization
task_ids:
- document-retrieval
- entity-linking-retrieval
- explanation-generation
- language-modeling
- masked-language-modeling
pretty_name: Cocktail Recipes
dataset_info:
features:
- name: title
dtype: string
- name: ingredients
sequence: string
- name: directions
sequence: string
- name: misc
sequence: string
- name: source
dtype: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 301501
num_examples: 875
download_size: 96915
dataset_size: 301501
---
# Dataset Card for Cocktail Recipes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
## Dataset Description
### Dataset Summary
Cocktail Recipes Dataset for Semi-Structured Text Generation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
```json
{"title": "Final Ward",
"ingredients": ["0.75 oz. Rye Whiskey",
"0.75 oz. Lemon Juice",
"0.75 oz. Maraschino Liqueur",
"0.75 oz. Green Chartreuse"],
"directions": ["shake on ice and strain"],
"misc":[],
"source": "Death & Co.",
"ner":["whiskey",
"chartreuse",
"maraschino liqueur"]}
```
### Data Fields
- `title` (`str`): Title of the recipe.
- `ingredients` (`list` of `str`): Ingredients.
- `directions` (`list` of `str`): Instruction steps.
- `source` (`str`): Origin of each recipe
- `ner` (`list` of `str`): NER entities.
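For the text-generation tasks listed above, a record can be flattened into a single training string; a minimal sketch using the instance shown earlier (the exact formatting is an assumption, not part of the dataset):

```python
record = {
    "title": "Final Ward",
    "ingredients": ["0.75 oz. Rye Whiskey", "0.75 oz. Lemon Juice",
                    "0.75 oz. Maraschino Liqueur", "0.75 oz. Green Chartreuse"],
    "directions": ["shake on ice and strain"],
}

def format_recipe(rec: dict) -> str:
    """Render a recipe record as plain text for language-model training."""
    lines = [rec["title"], "Ingredients:"]
    lines += [f"- {item}" for item in rec["ingredients"]]
    lines.append("Directions:")
    lines += [f"- {step}" for step in rec["directions"]]
    return "\n".join(lines)

text = format_recipe(record)
print(text)
```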
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
|
bahiags/Briggs_flow | ---
license: openrail
---
|
igorwang/citecls | ---
dataset_info:
features:
- name: output
dtype: string
- name: history
sequence: 'null'
- name: instruction
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 24576249
num_examples: 9882
download_size: 3487697
dataset_size: 24576249
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dippi9845/interval_tree_arxiv_long | ---
license: cc-by-nc-nd-4.0
---
|
BangumiBase/machinedollwakizutsukanai | ---
license: mit
tags:
- art
size_categories:
- n<1K
---
# Bangumi Image Base of Machine-doll Wa Kizutsukanai
This is the image base of the bangumi Machine-Doll wa Kizutsukanai. We detected 18 characters and 964 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may contain noise.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 190 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 26 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 264 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 14 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 123 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 12 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 77 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 14 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 13 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 9 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 67 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 14 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 6 | [Download](13/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 13 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 29 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 74 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
LDJnr/Puffin | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
pretty_name: Puffin
size_categories:
- 1K<n<10K
---
## This is the Official Puffin dataset. Exactly 3,000 examples with each response created using GPT-4.
## PLEASE USE THE NEWER VERSION OF PUFFIN CALLED PURE-DOVE; IT IS NO LONGER RECOMMENDED TO USE PUFFIN
- Comprising over 2,000 multi-turn conversations between GPT-4 and real humans.
- Average context length per conversation is over 1,000 tokens. (will measure this more accurately soon)
- Average turns per conversation is more than 10. (will measure this more accurately soon)
- The other portion of Puffin is made of manually curated subsets of the following (All responses synthesized using GPT-4):
  - CamelAI/Physics
  - CamelAI/Math
  - CamelAI/Biology
  - CamelAI/Chemistry
A majority of the real multi-turn conversations are made up of a curated subset of the original ShareGPT dataset.
- Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021"
- Most importantly, we narrowed down the ShareGPT dataset to strictly GPT-4 examples. Knowing which ShareGPT examples were GPT-4 vs GPT-3.5 would have been a much more arduous task if it weren't for the help of the folks over at OpenChat, who annotated the necessary examples.
Some of the steps in the curation process, particularly deciding how best to filter examples out, were relatively arduous to execute. Luckily, folks over at NousResearch helped expedite this process with little to no sacrifice in quality; a big thank you to J-Supha specifically for making these significant contributions.
Along with J-Supha, some other people are worth mentioning: the folks who joined long late-night calls to help debug and/or get Puffin training on Llama-2 ASAP, all within 12 hours of Llama-2 being announced.
- Emozilla, Teknium, Caseus. And of course thank you to RedmondAI for sponsoring the training compute!
## Future Plans & How you can help!
This is a relatively early build amongst the grand plans for the future of what I plan to work on!
In the near future, we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise, please contact LDJ on Discord!
|
Codec-SUPERB/fluent_speech_commands_test_subset_synth | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: original
num_bytes: 139532548.81443265
num_examples: 1888
- name: academicodec_hifi_16k_320d
num_bytes: 139018996.22381252
num_examples: 1888
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 139018996.22381252
num_examples: 1888
- name: academicodec_hifi_24k_320d
num_bytes: 208776661.60742936
num_examples: 1888
- name: audiodec_24k_320d
num_bytes: 209829612.96381852
num_examples: 1888
- name: dac_16k
num_bytes: 139596740.81443265
num_examples: 1888
- name: dac_24k
num_bytes: 209247859.22471124
num_examples: 1888
- name: dac_44k
num_bytes: 384244176.85264456
num_examples: 1888
- name: encodec_24k_12bps
num_bytes: 209247859.22471124
num_examples: 1888
- name: encodec_24k_1_5bps
num_bytes: 209247859.22471124
num_examples: 1888
- name: encodec_24k_24bps
num_bytes: 209247859.22471124
num_examples: 1888
- name: encodec_24k_3bps
num_bytes: 209247859.22471124
num_examples: 1888
- name: encodec_24k_6bps
num_bytes: 209247859.22471124
num_examples: 1888
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 139458633.9569284
num_examples: 1888
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 139458633.9569284
num_examples: 1888
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 139596740.81443265
num_examples: 1888
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 139596740.81443265
num_examples: 1888
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 139596740.81443265
num_examples: 1888
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 139596740.81443265
num_examples: 1888
- name: speech_tokenizer_16k
num_bytes: 140168434.60479978
num_examples: 1888
download_size: 3070365672
dataset_size: 3592977554.625037
---
# Dataset Card for "fluent_speech_commands_test_subset_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TheBritishLibrary/blbooks | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- de
- en
- es
- fr
- it
- nl
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: British Library Books
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- other
task_ids:
- language-modeling
- masked-language-modeling
tags:
- digital-humanities-research
dataset_info:
- config_name: all
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30394267732
num_examples: 14011953
download_size: 10486035662
dataset_size: 30394267732
- config_name: 1800s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30020434670
num_examples: 13781747
download_size: 10348577602
dataset_size: 30020434670
- config_name: 1700s
features:
- name: record_id
dtype: string
- name: date
dtype: int32
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 266382657
num_examples: 178224
download_size: 95137895
dataset_size: 266382657
- config_name: '1510_1699'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 107667469
num_examples: 51982
download_size: 42320165
dataset_size: 107667469
- config_name: '1500_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30452067039
num_examples: 14011953
download_size: 10486035662
dataset_size: 30452067039
- config_name: '1800_1899'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 30077284377
num_examples: 13781747
download_size: 10348577602
dataset_size: 30077284377
- config_name: '1700_1799'
features:
- name: record_id
dtype: string
- name: date
dtype: timestamp[s]
- name: raw_date
dtype: string
- name: title
dtype: string
- name: place
dtype: string
- name: empty_pg
dtype: bool
- name: text
dtype: string
- name: pg
dtype: int32
- name: mean_wc_ocr
dtype: float32
- name: std_wc_ocr
dtype: float64
- name: name
dtype: string
- name: all_names
dtype: string
- name: Publisher
dtype: string
- name: Country of publication 1
dtype: string
- name: all Countries of publication
dtype: string
- name: Physical description
dtype: string
- name: Language_1
dtype: string
- name: Language_2
dtype: string
- name: Language_3
dtype: string
- name: Language_4
dtype: string
- name: multi_language
dtype: bool
splits:
- name: train
num_bytes: 267117831
num_examples: 178224
download_size: 95137895
dataset_size: 267117831
---
# Dataset Card for British Library Books
## Table of Contents
- [Dataset Card for British Library Books](#dataset-card-for-British-Library-Books)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Language model training](#language-model-training)
- [Supervised tasks](#supervised-tasks)
- [Languages](#languages)
- [Language change](#language-change)
- [Optical Character Recognition](#optical-character-recognition)
- [OCR word confidence](#ocr-word-confidence)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Date normalization](#date-normalization)
- [Metadata included](#metadata-included)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Colonialism](#colonialism)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.bl.uk/collection-guides/digitised-printed-books
- **Repository:** https://doi.org/10.21250/db14
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** labs@bl.uk
### Dataset Summary
This dataset consists of books digitised by the British Library in partnership with Microsoft. The dataset includes ~25 million pages of out-of-copyright texts. The majority of the texts were published in the 18th and 19th centuries, but the collection also includes a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas, including geography, philosophy, history, poetry and literature, and are published in various languages.
While the books are predominantly from the 18th and 19th centuries, the corpus extends back to the early 1500s. The number of pages in the corpus by decade:
| | page count |
| ---- | ---------- |
| 1510 | 94 |
| 1520 | 32 |
| 1540 | 184 |
| 1550 | 16 |
| 1580 | 276 |
| 1590 | 540 |
| 1600 | 1117 |
| 1610 | 1132 |
| 1620 | 1856 |
| 1630 | 9274 |
| 1640 | 4232 |
| 1650 | 2944 |
| 1660 | 5858 |
| 1670 | 11415 |
| 1680 | 8348 |
| 1690 | 13756 |
| 1700 | 10160 |
| 1710 | 9556 |
| 1720 | 10314 |
| 1730 | 13282 |
| 1740 | 10778 |
| 1750 | 12001 |
| 1760 | 21415 |
| 1770 | 28490 |
| 1780 | 32676 |
| 1790 | 50014 |
| 1800 | 307806 |
| 1810 | 478008 |
| 1820 | 589419 |
| 1830 | 681212 |
| 1840 | 1113473 |
| 1850 | 1726108 |
| 1860 | 1725407 |
| 1870 | 2069089 |
| 1880 | 2585159 |
| 1890 | 3365031 |
[More Information Needed]
### Supported Tasks and Leaderboards
This collection has been previously used across various digital history and humanities projects since being published.
The dataset consists of text and a range of metadata associated with this text. This metadata includes:
- date of publication
- place of publication
- country of publication
- language
- OCR quality
- physical description of the original physical item
#### Language model training
As a relatively large dataset, `blbooks` provides a source dataset for training language models. The presence of this metadata also offers interesting opportunities to use this dataset as a source for training language models based on:
- specific time-periods
- specific languages
- certain OCR quality thresholds
The above is not an exhaustive list but offers some suggestions of how the dataset can be used to explore topics such as the impact of OCR quality on language models, the ‘transferability’ of language models across time, or the impact of training multilingual language models on historical languages.
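For instance, a period- and quality-restricted training subset could be selected with a plain filter over the `date` and `mean_wc_ocr` fields; a minimal sketch with illustrative page records (not real data):

```python
# Illustrative page records mirroring the dataset's `date` and `mean_wc_ocr` fields.
pages = [
    {"text": "first page", "date": 1850, "mean_wc_ocr": 0.71},
    {"text": "second page", "date": 1850, "mean_wc_ocr": 0.35},
    {"text": "third page", "date": 1620, "mean_wc_ocr": 0.72},
]

def select_pages(pages, start=1800, end=1899, min_conf=0.5):
    """Keep pages from a publication window whose mean OCR word confidence meets a threshold."""
    return [p for p in pages
            if start <= p["date"] <= end and p["mean_wc_ocr"] >= min_conf]

print(len(select_pages(pages)))
```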
#### Supervised tasks
Whilst this dataset does not have annotations for a specific NLP task, such as Named Entity Recognition, it does include a wide variety of metadata. This metadata has the potential to be used for training and/or evaluating a variety of supervised tasks predicting this metadata.
### Languages
This dataset consists of books published in several languages. The breakdown of the languages included (at the page level) is:
| Language | Pages |
| --------------------- | -------- |
| English | 10039463 |
| French | 1442929 |
| German | 1172793 |
| Spanish | 286778 |
| Italian | 214255 |
| Dutch | 204759 |
| Russian | 193347 |
| Danish | 93366 |
| Hungarian | 88094 |
| Swedish | 76225 |
| Polish | 58901 |
| Greek, Modern (1453-) | 26104 |
| Latin | 25611 |
| Portuguese | 25410 |
| Czech | 20160 |
| Bulgarian | 7891 |
| Finnish | 5677 |
| Irish | 2743 |
| Serbian | 1975 |
| Romanian | 1544 |
| Norwegian Nynorsk | 1398 |
| Croatian | 1306 |
| Norwegian | 1227 |
| Icelandic | 902 |
| Slovak | 840 |
| Lithuanian | 714 |
| Welsh | 580 |
| Slovenian | 545 |
| Indonesian | 418 |
| Cornish | 223 |
This breakdown was derived from the first language in the associated metadata field. Some books include multiple languages, and some of the language codes for this data were derived using computational methods. Therefore, the language fields in the dataset should be treated with some caution (discussed in more detail below).
#### Language change
The publication dates of books in the data cover a broad period of time (1500-1900). For languages in the dataset with broad temporal coverage, significant [language change](https://en.wikipedia.org/wiki/Language_change) might be found. The ability to study this change by taking reasonably large samples of languages covering different time periods is one of the opportunities offered by this dataset. The fact that the text in this dataset was produced via Optical Character Recognition (OCR) causes some challenges for this type of research (see below).
#### Optical Character Recognition
The digitised books in this collection were transformed into machine-readable text using Optical Character Recognition (OCR) software. Text produced via OCR software will usually include some errors. These errors range from mistakes at the character level (for example, an `i` mistaken for an `l`), to mistakes at the word level, to errors spanning significant passages of text.
The books in this dataset can pose some additional challenges for OCR software. OCR errors can stem from:
- the quality of the original printing: printing was still a developing technology during the period covered by this corpus; some of the original book text includes misprints, or blurred or faded ink that is hard to read
- damage to the page: some of the books have become damaged over time, which can obscure all or part of the text on a page
- poor-quality scans: scanning books can be challenging; for example, if a book has tight bindings, it can be hard to capture text that has fallen into the [gutter](https://www.abaa.org/glossary/entry/gutter) of the book
- the language used in the books may differ from the languages the OCR software was predominantly trained to recognise.
##### OCR word confidence
Many OCR engines produce some form of confidence score alongside the predicted text, usually at the character or word level. For this dataset, a word confidence score was given for each word in the original ALTO XML versions of the text. The OCR confidence scores should be treated with some scepticism: for historical text, or in a lower-resource language, a low confidence score may simply reflect words not included in a modern dictionary that are nonetheless accurate transcriptions of the original text. With that said, the confidence scores do give some sense of the OCR quality.
An example of text with a high (over 90% mean word confidence score):
```
8 direction to the Conduit, round which is a wide open space, and a good broad pavement called the Parade. It commands a pleasant peep of the slopes and terrace throughout its entire length. The street continuing from the Conduit, in the same general direction, was known anciently as Lodborne Lane, and is now named South Street. From the Conduit two other streets, at right angles to these, are Long Street, leading Eastwards, and Half-Moon Street (formerly Lodborne), leading to Westbury, Trendle Street, and the Horsecastles Road.
```
An example of text with a score below 40%:
```
Hannover. Schrift und Druck von Fr. CultniTmn,',
"LeMNs'utluirui.",
'ü 8u«llim» M^äalßwi 01de!lop 1<M.',
'p^dnalmw vom Xr^u/e, lpiti>»**Kmm lie« !»^2!M kleine lii!<! (,«>* ttünee!<»e^ v»n tndzt Lievclum, 1872,
```
The quality of OCR - as measured by mean OCR confidence for a page - across the dataset correlates with other features. A groupby of publication decade and mean word confidence:
| decade | mean_wc_ocr |
| ------ | ----------- |
| 1510 | 0.499151 |
| 1520 | 0.544818 |
| 1540 | 0.511589 |
| 1550 | 0.4505 |
| 1580 | 0.321858 |
| 1590 | 0.461282 |
| 1600 | 0.467318 |
| 1610 | 0.495895 |
| 1620 | 0.501257 |
| 1630 | 0.49766 |
| 1640 | 0.512095 |
| 1650 | 0.528534 |
| 1660 | 0.521014 |
| 1670 | 0.592575 |
| 1680 | 0.583901 |
| 1690 | 0.567202 |
| 1700 | 0.575175 |
| 1710 | 0.61436 |
| 1720 | 0.627725 |
| 1730 | 0.658534 |
| 1740 | 0.64214 |
| 1750 | 0.657357 |
| 1760 | 0.6389 |
| 1770 | 0.651883 |
| 1780 | 0.632326 |
| 1790 | 0.664279 |
| 1800 | 0.682338 |
| 1810 | 0.708915 |
| 1820 | 0.730015 |
| 1830 | 0.730973 |
| 1840 | 0.713886 |
| 1850 | 0.697106 |
| 1860 | 0.696701 |
| 1870 | 0.717233 |
| 1880 | 0.733331 |
| 1890 | 0.762364 |
As might be expected, the earlier periods have lower mean word confidence scores. Again, all of this should be treated with some scepticism, especially as the size of the data grows over time.
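A table like the one above can be reproduced with a plain groupby over the page records; a minimal sketch with illustrative values (not taken from the dataset):

```python
from collections import defaultdict

# Illustrative page records with a publication year and mean OCR word confidence.
pages = [
    {"date": 1852, "mean_wc_ocr": 0.70},
    {"date": 1858, "mean_wc_ocr": 0.80},
    {"date": 1623, "mean_wc_ocr": 0.50},
]

def mean_conf_by_decade(pages):
    """Group pages by publication decade and average their OCR word confidence."""
    totals = defaultdict(lambda: [0.0, 0])
    for page in pages:
        decade = (page["date"] // 10) * 10
        totals[decade][0] += page["mean_wc_ocr"]
        totals[decade][1] += 1
    return {decade: total / count for decade, (total, count) in sorted(totals.items())}

print(mean_conf_by_decade(pages))
```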
As with time, the mean word confidence of the OCR software varies across languages:
| Language_1 | mean_wc_ocr |
| --------------------- | ----------- |
| Croatian | 0.755565 |
| Welsh | 0.7528 |
| Norwegian Nynorsk | 0.751648 |
| Slovenian | 0.746007 |
| French | 0.740772 |
| Finnish | 0.738032 |
| Czech | 0.737849 |
| Hungarian | 0.736076 |
| Dutch | 0.734977 |
| Cornish | 0.733682 |
| Danish | 0.733106 |
| English | 0.733037 |
| Irish | 0.732658 |
| Portuguese | 0.727746 |
| Spanish | 0.725111 |
| Icelandic | 0.724427 |
| Italian | 0.715839 |
| Swedish | 0.715633 |
| Polish | 0.715133 |
| Lithuanian | 0.700003 |
| Bulgarian | 0.694657 |
| Romanian | 0.692957 |
| Latin | 0.689022 |
| Russian | 0.685847 |
| Serbian | 0.674329 |
| Slovak | 0.66739 |
| Greek, Modern (1453-) | 0.632195 |
| German | 0.631457 |
| Indonesian | 0.6155 |
| Norwegian | 0.597987 |
Again, these numbers should be treated sceptically since some languages appear very infrequently. For example, the above table suggests the mean word confidence for Welsh is relatively high. However, there isn’t much Welsh in the dataset. Therefore, it is unlikely that this data will be particularly useful for training (historic) Welsh language models.
[More Information Needed]
## Dataset Structure
The dataset has a number of configurations relating to the different dates of publication in the underlying data:
- `1500_1899`: this configuration covers all years
- `1800_1899`: this configuration covers the years between 1800 and 1899
- `1700_1799`: this configuration covers the years between 1700 and 1799
- `1510_1699`: this configuration covers the years between 1510 and 1699
### Configuration option
All of the configurations have an optional keyword argument `skip_empty_pages`, which is set to `True` by default. The underlying dataset includes some pages where there is no text, either because the original book page didn't contain any text or because the OCR software failed to detect it.
For many uses of this dataset it doesn't make sense to include empty pages, so these are skipped by default. However, for some uses you may prefer to retain a representation of the data that includes these empty pages. Passing `skip_empty_pages=False` when loading the dataset will enable this option.
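The effect of `skip_empty_pages` can be pictured as a simple filter over page records. A minimal sketch (the records here are made up; `empty_pg` is the field described under Data Fields below):

```python
def filter_pages(pages, skip_empty_pages=True):
    """Drop pages flagged as empty when skip_empty_pages is True."""
    if not skip_empty_pages:
        return list(pages)
    return [page for page in pages if not page["empty_pg"]]

pages = [
    {"pg": 1, "empty_pg": True, "text": None},
    {"pg": 2, "empty_pg": False, "text": "CHAPTER I."},
]

print(len(filter_pages(pages)))                          # skips the empty page
print(len(filter_pages(pages, skip_empty_pages=False)))  # keeps both pages
```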
### Data Instances
An example data instance:
```python
{'Country of publication 1': 'England',
'Language_1': 'English',
'Language_2': None,
'Language_3': None,
'Language_4': None,
'Physical description': None,
'Publisher': None,
'all Countries of publication': 'England',
'all names': 'Settle, Elkanah [person]',
'date': 1689,
'empty_pg': True,
'mean_wc_ocr': 0.0,
'multi_language': False,
'name': 'Settle, Elkanah',
'pg': 1,
'place': 'London',
'raw_date': '1689',
'record_id': '001876770',
'std_wc_ocr': 0.0,
'text': None,
'title': 'The Female Prelate: being the history and the life and death of Pope Joan. A tragedy [in five acts and in verse] . Written by a Person of Quality [i.e. Elkanah Settle]'}
```
Each instance in the dataset represents a single page from an original digitised book.
### Data Fields
Included in this dataset are:
| Field | Data Type | Description |
| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------- |
| record_id | string | British Library ID for the item |
| date | int | parsed/normalised year for the item. i.e. 1850 |
| raw_date | string | the original raw date for an item i.e. 1850- |
| title | string | title of the book |
| place | string | Place of publication, i.e. London |
| empty_pg | bool | whether page contains text |
| text | string | OCR generated text for a page |
| pg | int | page in original book the instance refers to |
| mean_wc_ocr | float | mean word confidence values for the page |
| std_wc_ocr | float | standard deviation of the word confidence values for the page |
| name | string | name associated with the item (usually author) |
| all names | string | all names associated with a publication |
| Publisher | string | publisher of the book |
| Country of publication 1 | string | first country associated with publication |
| all Countries of publication | string | all countries associated with a publication |
| Physical description | string | physical description of the item (size). This requires some normalisation before use and isn’t always present |
| Language_1 | string | first language associated with the book, this is usually present |
| Language_2 | string | |
| Language_3 | string | |
| Language_4 | string | |
| multi_language | bool | |
Some of these fields are not populated a large proportion of the time. You can get some sense of this from this [Pandas Profiling](https://github.com/pandas-profiling/pandas-profiling) [report](https://davanstrien.github.io/BL-datasets-pandas-profile-reports/pandas_profile_report_MS_digitised_books_2021-01-09.html).
The majority of these fields relate to metadata about the books. Most of these fields were created by staff working for the British Library. The notable exception is the “Languages” fields, which have sometimes been determined using computational methods. This work is reported in more detail in [Automated Language Identification of Bibliographic Resources](https://doi.org/10.1080/01639374.2019.1700201). It is important to note that metadata is neither perfect nor static. The metadata associated with these books was generated from an export of the British Library catalogue in 2021.
[More Information Needed]
### Data Splits
This dataset contains a single split `train`.
## Dataset Creation
**Note** this section is a work in progress.
### Curation Rationale
The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. [Mass digitisation](https://en.wikipedia.org/wiki/Category:Mass_digitization), i.e. projects intending to digitise large volumes of materials quickly, shapes the selection of materials in several ways. Some considerations often involved in the decision of whether to include items for digitisation include (but are not limited to):
- copyright status
- preservation needs
- the size of an item: very large and very small items are often hard to digitise quickly
These criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitised. Size is likely to correlate with content to at least some extent, so excluding large volumes from digitisation will mean that the material they contain is underrepresented. Similarly, copyright status is often (but not only) determined by publication date, which can lead to a rapid fall in the number of items in a collection after a certain cut-off date.
All of the above is largely to make clear that this collection was not curated to create a representative sample of the British Library’s holdings. Some material will be over-represented, and others under-represented. Similarly, the collection should not be considered a representative sample of what was published across the period covered by the dataset (nor that the relative proportions of the data for each time period represent a proportional sample of publications from that period). Finally, and this probably does not need stating, the language included in the text should not be considered representative of either written or spoken language(s) from that time period.
[More Information Needed]
### Source Data
The source data (physical items) includes a variety of resources (predominantly monographs) held by the [British Library](https://bl.uk/). The British Library is a [Legal Deposit](https://www.bl.uk/legal-deposit/about-legal-deposit) library. “Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It’s existed in English law since 1662.” [source](https://www.bl.uk/legal-deposit/about-legal-deposit).
The source data for this version of the data is derived from the original ALTO XML files and a recent metadata export #TODO add links
[More Information Needed]
#### Initial Data Collection and Normalization
This version of the dataset was created using the original ALTO XML files; where a match was found, the metadata associated with an item was updated using more recent metadata from an export of the British Library catalogue. The process of creating this new dataset is documented here #TODO add link.
There are a few decisions made in the above processing steps worth highlighting in particular:
##### Date normalization
The metadata around date of publication for an item is not always exact. It is often represented as a date range, e.g. `1850-1860`. The `date` field above takes steps to normalise this date to a single integer value. In most cases, this is done by taking the mean of the values associated with the item. The `raw_date` field includes the unprocessed date string.
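As a rough sketch of that normalisation (the exact rules in the processing pipeline may differ; this simply averages the four-digit years found in the raw string):

```python
import re

def normalise_date(raw_date: str) -> int:
    """Collapse a raw date string such as '1850-1860' to a single year
    by taking the mean of the four-digit years it contains."""
    years = [int(year) for year in re.findall(r"\d{4}", raw_date)]
    if not years:
        raise ValueError(f"no year found in {raw_date!r}")
    return round(sum(years) / len(years))

print(normalise_date("1850-1860"))  # 1855
print(normalise_date("1689"))       # 1689
```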
##### Metadata included
The metadata associated with each item includes most of the fields available via the ALTO XML. However, the data doesn’t include some metadata fields from the metadata export file. Fields were excluded because they are frequently not populated. A cut-off of 50% was chosen, i.e. fields whose values are missing more than 50% of the time were not included. This is slightly arbitrary, but since the aim of this version of the data was to support computational research using the collection, it was felt that fields with frequently missing values would be less valuable.
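The 50% cut-off can be expressed as a small check over the export. A hedged sketch (the field names and records here are illustrative, not the actual export):

```python
def fields_to_keep(records, max_missing=0.5):
    """Keep fields whose value is missing in at most max_missing of records."""
    total = len(records)
    fields = {field for record in records for field in record}
    keep = []
    for field in sorted(fields):
        missing = sum(1 for record in records if record.get(field) is None)
        if missing / total <= max_missing:
            keep.append(field)
    return keep

records = [
    {"title": "The Fan. A poem", "Publisher": None, "place": "London"},
    {"title": "Grif", "Publisher": None, "place": None},
    {"title": "Questings", "Publisher": "J. Murray", "place": "London"},
]

# "Publisher" is missing in 2 of 3 records (> 50%), so it is dropped.
print(fields_to_keep(records))
```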
#### Who are the source language producers?
[More Information Needed]
### Annotations
This dataset does not include annotations as usually understood in the context of NLP. The data does include metadata associated with the books.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
There are a range of considerations around using the data. These include the representativeness of the dataset, the OCR quality, and the language used. Depending on your use case, these may be more or less important. For example, the impact of OCR quality on downstream tasks will depend on the target task. It may also be possible to mitigate the negative impact of OCR errors through tokenizer choice, language model training objectives, oversampling high-quality OCR, etc.
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
The text in this collection is derived from historical texts. As a result, it will reflect the social beliefs and attitudes of the time periods it covers. The collection includes both fiction and non-fiction books.
Examples of book titles that appear in the data (these are randomly sampled from all titles):
- ‘Rhymes and Dreams, Legends of Pendle Forest, and other poems’,
- “Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General’s Department, Horse Guards, War Office, etc”,
- ‘The fan. A poem’,
- ‘Grif; a story of Australian Life’,
- ‘Calypso; a masque: in three acts, etc’,
- ‘Tales Uncle told [With illustrative woodcuts.]’,
- 'Questings',
- 'Home Life on an Ostrich Farm. With ... illustrations’,
- ‘Bulgarya i Bulgarowie’,
- 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',
- ‘The Corsair, a tale’,
- ‘Poems ... With notes [With a portrait.]’,
- ‘Report of the Librarian for the year 1898 (1899, 1901, 1909)’,
- “The World of Thought. A novel. By the author of ‘Before I began to speak.’”,
- 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc'
While using titles alone is insufficient to investigate bias in this collection, it gives some insight into the topics covered by the books. Further, the titles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.
#### Colonialism
Even in the above random sample of titles, we can see examples of colonial attitudes. We can try to interrogate this further by searching for the names of places that were part of the British Empire when many of these books were published.
Searching for the string `India` in the titles and randomly sampling 10 titles returns:
- “Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the ‘Calcutta Weekly Englishman.’”,
- ‘A Winter in India and Malaysia among the Methodist Missions’,
- “The Tourist’s Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition”,
- ‘Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson’,
- "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]”,
- ‘The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies’,
- “From Tonkin to India : by the sources of the Irawadi, January’ 95-January ’96”,
- ‘Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844’,
- ‘The Andaman Islands; their colonisation, etc. A correspondence addressed to the India Office’,
- ‘Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle’
Searching for the string `Africa` in the titles and randomly sampling 10 titles returns:
- 'De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',
- ‘To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]’,
- ‘Diamonds and Gold in South Africa ... With maps, etc’,
- ‘Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition’,
- ‘A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts’,
- ‘Side Lights on South Africa ... With a map, etc’,
- ‘My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc’,
- ‘Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations’,
- ‘[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder’,
- ‘Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc’
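Searches like the two above amount to a substring match over the title field followed by a random sample. A minimal sketch (the titles here are a small stand-in for the full metadata; the real searches sampled 10 titles):

```python
import random

def sample_titles(titles, term, k=2, seed=0):
    """Return up to k randomly sampled titles containing the search term."""
    matches = [title for title in titles if term in title]
    rng = random.Random(seed)
    return rng.sample(matches, min(k, len(matches)))

titles = [
    "Travels in India in the Seventeenth Century",
    "Diamonds and Gold in South Africa",
    "The fan. A poem",
    "Ancient India as described by Ptolemy",
]

print(sample_titles(titles, "India"))
print(sample_titles(titles, "Africa", k=1))
```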
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The books are licensed under the [CC Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/) license.
### Citation Information
```bibtex
@misc{britishlibrarybooks2021,
author = {British Library Labs},
title = {Digitised Books. c. 1510 - c. 1900. JSONL (OCR derived text + metadata)},
year = {2021},
publisher = {British Library},
howpublished = {https://doi.org/10.23636/r7w6-zy15}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. |
liuyanchen1015/parsed_sst2 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: positive
- name: idx
dtype: int32
- name: parse_tree
dtype: string
- name: pure_parse_tree
dtype: string
splits:
- name: train
num_bytes: 22647332
num_examples: 67349
- name: validation
num_bytes: 560160
num_examples: 872
- name: test
num_bytes: 1155733
num_examples: 1821
download_size: 10913172
dataset_size: 24363225
---
# Dataset Card for "parsed_sst2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DBQ/Balenciaga.Product.prices.Hong.Kong | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: Hong Kong - Balenciaga - Product-level price list
tags:
- webscraping
- ecommerce
- Balenciaga
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: string
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 858709
num_examples: 2307
download_size: 274910
dataset_size: 858709
---
# Balenciaga web scraped data
## About the website
Balenciaga operates within the **luxury fashion industry** in the **Asia Pacific region**, specifically in **Hong Kong**, which is known for its strong demand for high-end fashion. As part of the global trend, the industry has shifted towards **Ecommerce**, which has been growing significantly in the past few years. The dataset observed includes **Ecommerce product-list page (PLP) data on Balenciaga in Hong Kong**, providing valuable insights into the market. **Product listings**, pricing, and availability data are examples of the information included. Other geographic details or types of data can be found on the [Balenciaga main page](https://www.databoutique.com/buy-data-list-subset/Balenciaga%20web%20scraped%20data/r/rec0EGCU96DEBdTOE).
## Link to **dataset**
[Hong Kong - Balenciaga - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Balenciaga%20Product-prices%20Hong%20Kong/r/recR8PANJMgN5obaw)
|
SarthakG/123_smart | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1 | ---
pretty_name: Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [OpenBuddy/openbuddy-deepseek-67b-v15.1](https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-10T20:13:41.089487](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1/blob/main/results_2023-12-10T20-13-41.089487.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7036058129176036,\n\
\ \"acc_stderr\": 0.03028453159020021,\n \"acc_norm\": 0.705307528908225,\n\
\ \"acc_norm_stderr\": 0.030895027239583782,\n \"mc1\": 0.39167686658506734,\n\
\ \"mc1_stderr\": 0.017087795881769625,\n \"mc2\": 0.5441532764532347,\n\
\ \"mc2_stderr\": 0.015072690852418868\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6527303754266212,\n \"acc_stderr\": 0.013913034529620451,\n\
\ \"acc_norm\": 0.6766211604095563,\n \"acc_norm_stderr\": 0.013669421630012127\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6784505078669588,\n\
\ \"acc_stderr\": 0.004661165425661981,\n \"acc_norm\": 0.8648675562636925,\n\
\ \"acc_norm_stderr\": 0.0034116630716511135\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n\
\ \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n\
\ \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.03317672787533157,\n\
\ \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.03317672787533157\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.81,\n\
\ \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \
\ \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7509433962264151,\n \"acc_stderr\": 0.02661648298050171,\n\
\ \"acc_norm\": 0.7509433962264151,\n \"acc_norm_stderr\": 0.02661648298050171\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n\
\ \"acc_stderr\": 0.03167473383795717,\n \"acc_norm\": 0.8263888888888888,\n\
\ \"acc_norm_stderr\": 0.03167473383795717\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.59,\n \"acc_stderr\": 0.04943110704237101,\n \"acc_norm\": 0.59,\n\
\ \"acc_norm_stderr\": 0.04943110704237101\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n\
\ \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n\
\ \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n\
\ \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \"acc_norm\": 0.82,\n\
\ \"acc_norm_stderr\": 0.03861229196653695\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6595744680851063,\n \"acc_stderr\": 0.03097669299853443,\n\
\ \"acc_norm\": 0.6595744680851063,\n \"acc_norm_stderr\": 0.03097669299853443\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5350877192982456,\n\
\ \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.5350877192982456,\n\
\ \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6620689655172414,\n \"acc_stderr\": 0.039417076320648906,\n\
\ \"acc_norm\": 0.6620689655172414,\n \"acc_norm_stderr\": 0.039417076320648906\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"\
acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n\
\ \"acc_stderr\": 0.021732540689329286,\n \"acc_norm\": 0.8225806451612904,\n\
\ \"acc_norm_stderr\": 0.021732540689329286\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5911330049261084,\n \"acc_stderr\": 0.03459058815883233,\n\
\ \"acc_norm\": 0.5911330049261084,\n \"acc_norm_stderr\": 0.03459058815883233\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\"\
: 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.031234752377721175,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.031234752377721175\n \
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8838383838383839,\n \"acc_stderr\": 0.022828881775249377,\n \"\
acc_norm\": 0.8838383838383839,\n \"acc_norm_stderr\": 0.022828881775249377\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.016731085293607558,\n\
\ \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.016731085293607558\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.717948717948718,\n \"acc_stderr\": 0.022815813098896607,\n \
\ \"acc_norm\": 0.717948717948718,\n \"acc_norm_stderr\": 0.022815813098896607\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37037037037037035,\n \"acc_stderr\": 0.02944316932303154,\n \
\ \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.02944316932303154\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.8025210084033614,\n \"acc_stderr\": 0.025859164122051456,\n\
\ \"acc_norm\": 0.8025210084033614,\n \"acc_norm_stderr\": 0.025859164122051456\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.44370860927152317,\n \"acc_stderr\": 0.04056527902281732,\n \"\
acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.04056527902281732\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9009174311926605,\n \"acc_stderr\": 0.012809780081878929,\n \"\
acc_norm\": 0.9009174311926605,\n \"acc_norm_stderr\": 0.012809780081878929\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.6111111111111112,\n \"acc_stderr\": 0.03324708911809117,\n \"\
acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.03324708911809117\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9215686274509803,\n \"acc_stderr\": 0.01886951464665893,\n \"\
acc_norm\": 0.9215686274509803,\n \"acc_norm_stderr\": 0.01886951464665893\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9029535864978903,\n \"acc_stderr\": 0.01926932302564026,\n \
\ \"acc_norm\": 0.9029535864978903,\n \"acc_norm_stderr\": 0.01926932302564026\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n\
\ \"acc_stderr\": 0.02737309550054019,\n \"acc_norm\": 0.7892376681614349,\n\
\ \"acc_norm_stderr\": 0.02737309550054019\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624714,\n\
\ \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624714\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n\
\ \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n\
\ \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615771,\n\
\ \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615771\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5714285714285714,\n\
\ \"acc_stderr\": 0.04697113923010213,\n \"acc_norm\": 0.5714285714285714,\n\
\ \"acc_norm_stderr\": 0.04697113923010213\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.9029126213592233,\n \"acc_stderr\": 0.02931596291881347,\n\
\ \"acc_norm\": 0.9029126213592233,\n \"acc_norm_stderr\": 0.02931596291881347\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9102564102564102,\n\
\ \"acc_stderr\": 0.01872430174194166,\n \"acc_norm\": 0.9102564102564102,\n\
\ \"acc_norm_stderr\": 0.01872430174194166\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8914431673052363,\n\
\ \"acc_stderr\": 0.011124283175851183,\n \"acc_norm\": 0.8914431673052363,\n\
\ \"acc_norm_stderr\": 0.011124283175851183\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7658959537572254,\n \"acc_stderr\": 0.022797110278071128,\n\
\ \"acc_norm\": 0.7658959537572254,\n \"acc_norm_stderr\": 0.022797110278071128\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.45139664804469276,\n\
\ \"acc_stderr\": 0.016643307372315872,\n \"acc_norm\": 0.45139664804469276,\n\
\ \"acc_norm_stderr\": 0.016643307372315872\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7909967845659164,\n\
\ \"acc_stderr\": 0.02309314039837422,\n \"acc_norm\": 0.7909967845659164,\n\
\ \"acc_norm_stderr\": 0.02309314039837422\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.020736358408060006,\n\
\ \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.020736358408060006\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.475177304964539,\n \"acc_stderr\": 0.02979071924382972,\n \
\ \"acc_norm\": 0.475177304964539,\n \"acc_norm_stderr\": 0.02979071924382972\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5391134289439374,\n\
\ \"acc_stderr\": 0.012731102790504519,\n \"acc_norm\": 0.5391134289439374,\n\
\ \"acc_norm_stderr\": 0.012731102790504519\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7279411764705882,\n \"acc_stderr\": 0.027033041151681456,\n\
\ \"acc_norm\": 0.7279411764705882,\n \"acc_norm_stderr\": 0.027033041151681456\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.7679738562091504,\n \"acc_stderr\": 0.017077373377856926,\n \
\ \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.017077373377856926\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7795918367346939,\n \"acc_stderr\": 0.02653704531214529,\n\
\ \"acc_norm\": 0.7795918367346939,\n \"acc_norm_stderr\": 0.02653704531214529\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n\
\ \"acc_stderr\": 0.023729830881018526,\n \"acc_norm\": 0.8706467661691543,\n\
\ \"acc_norm_stderr\": 0.023729830881018526\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \
\ \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n\
\ \"acc_stderr\": 0.038444531817709175,\n \"acc_norm\": 0.5783132530120482,\n\
\ \"acc_norm_stderr\": 0.038444531817709175\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8713450292397661,\n \"acc_stderr\": 0.02567934272327692,\n\
\ \"acc_norm\": 0.8713450292397661,\n \"acc_norm_stderr\": 0.02567934272327692\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.39167686658506734,\n\
\ \"mc1_stderr\": 0.017087795881769625,\n \"mc2\": 0.5441532764532347,\n\
\ \"mc2_stderr\": 0.015072690852418868\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8476716653512234,\n \"acc_stderr\": 0.010099208246065614\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6694465504169825,\n \
\ \"acc_stderr\": 0.012957496367085024\n }\n}\n```"
repo_url: https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|arc:challenge|25_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|gsm8k|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hellaswag|10_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-10T20-13-41.089487.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- '**/details_harness|winogrande|5_2023-12-10T20-13-41.089487.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-10T20-13-41.089487.parquet'
- config_name: results
data_files:
- split: 2023_12_10T20_13_41.089487
path:
- results_2023-12-10T20-13-41.089487.parquet
- split: latest
path:
- results_2023-12-10T20-13-41.089487.parquet
---
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-deepseek-67b-v15.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-deepseek-67b-v15.1](https://huggingface.co/OpenBuddy/openbuddy-deepseek-67b-v15.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1",
"harness_winogrande_5",
	split="latest")
```
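The run splits listed in the configuration above are named after the run timestamp, with `-` and `:` replaced by `_`; a minimal sketch of that mapping:

```python
def timestamp_to_split(ts: str) -> str:
    """Map a run timestamp to its split name, e.g.
    "2023-12-10T20:13:41.089487" -> "2023_12_10T20_13_41.089487"."""
    return ts.replace("-", "_").replace(":", "_")

split = timestamp_to_split("2023-12-10T20:13:41.089487")
```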
## Latest results
These are the [latest results from run 2023-12-10T20:13:41.089487](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseek-67b-v15.1/blob/main/results_2023-12-10T20-13-41.089487.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.7036058129176036,
"acc_stderr": 0.03028453159020021,
"acc_norm": 0.705307528908225,
"acc_norm_stderr": 0.030895027239583782,
"mc1": 0.39167686658506734,
"mc1_stderr": 0.017087795881769625,
"mc2": 0.5441532764532347,
"mc2_stderr": 0.015072690852418868
},
"harness|arc:challenge|25": {
"acc": 0.6527303754266212,
"acc_stderr": 0.013913034529620451,
"acc_norm": 0.6766211604095563,
"acc_norm_stderr": 0.013669421630012127
},
"harness|hellaswag|10": {
"acc": 0.6784505078669588,
"acc_stderr": 0.004661165425661981,
"acc_norm": 0.8648675562636925,
"acc_norm_stderr": 0.0034116630716511135
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.03317672787533157,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.03317672787533157
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7509433962264151,
"acc_stderr": 0.02661648298050171,
"acc_norm": 0.7509433962264151,
"acc_norm_stderr": 0.02661648298050171
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8263888888888888,
"acc_stderr": 0.03167473383795717,
"acc_norm": 0.8263888888888888,
"acc_norm_stderr": 0.03167473383795717
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237101,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237101
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6820809248554913,
"acc_stderr": 0.0355068398916558,
"acc_norm": 0.6820809248554913,
"acc_norm_stderr": 0.0355068398916558
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082635,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082635
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6595744680851063,
"acc_stderr": 0.03097669299853443,
"acc_norm": 0.6595744680851063,
"acc_norm_stderr": 0.03097669299853443
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5350877192982456,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.5350877192982456,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6620689655172414,
"acc_stderr": 0.039417076320648906,
"acc_norm": 0.6620689655172414,
"acc_norm_stderr": 0.039417076320648906
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8225806451612904,
"acc_stderr": 0.021732540689329286,
"acc_norm": 0.8225806451612904,
"acc_norm_stderr": 0.021732540689329286
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5911330049261084,
"acc_stderr": 0.03459058815883233,
"acc_norm": 0.5911330049261084,
"acc_norm_stderr": 0.03459058815883233
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8,
"acc_stderr": 0.031234752377721175,
"acc_norm": 0.8,
"acc_norm_stderr": 0.031234752377721175
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8838383838383839,
"acc_stderr": 0.022828881775249377,
"acc_norm": 0.8838383838383839,
"acc_norm_stderr": 0.022828881775249377
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9430051813471503,
"acc_stderr": 0.016731085293607558,
"acc_norm": 0.9430051813471503,
"acc_norm_stderr": 0.016731085293607558
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.717948717948718,
"acc_stderr": 0.022815813098896607,
"acc_norm": 0.717948717948718,
"acc_norm_stderr": 0.022815813098896607
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.02944316932303154,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.02944316932303154
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8025210084033614,
"acc_stderr": 0.025859164122051456,
"acc_norm": 0.8025210084033614,
"acc_norm_stderr": 0.025859164122051456
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.44370860927152317,
"acc_stderr": 0.04056527902281732,
"acc_norm": 0.44370860927152317,
"acc_norm_stderr": 0.04056527902281732
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9009174311926605,
"acc_stderr": 0.012809780081878929,
"acc_norm": 0.9009174311926605,
"acc_norm_stderr": 0.012809780081878929
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.03324708911809117,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.03324708911809117
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9215686274509803,
"acc_stderr": 0.01886951464665893,
"acc_norm": 0.9215686274509803,
"acc_norm_stderr": 0.01886951464665893
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9029535864978903,
"acc_stderr": 0.01926932302564026,
"acc_norm": 0.9029535864978903,
"acc_norm_stderr": 0.01926932302564026
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7892376681614349,
"acc_stderr": 0.02737309550054019,
"acc_norm": 0.7892376681614349,
"acc_norm_stderr": 0.02737309550054019
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8015267175572519,
"acc_stderr": 0.034981493854624714,
"acc_norm": 0.8015267175572519,
"acc_norm_stderr": 0.034981493854624714
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615771,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615771
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5714285714285714,
"acc_stderr": 0.04697113923010213,
"acc_norm": 0.5714285714285714,
"acc_norm_stderr": 0.04697113923010213
},
"harness|hendrycksTest-management|5": {
"acc": 0.9029126213592233,
"acc_stderr": 0.02931596291881347,
"acc_norm": 0.9029126213592233,
"acc_norm_stderr": 0.02931596291881347
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9102564102564102,
"acc_stderr": 0.01872430174194166,
"acc_norm": 0.9102564102564102,
"acc_norm_stderr": 0.01872430174194166
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8914431673052363,
"acc_stderr": 0.011124283175851183,
"acc_norm": 0.8914431673052363,
"acc_norm_stderr": 0.011124283175851183
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7658959537572254,
"acc_stderr": 0.022797110278071128,
"acc_norm": 0.7658959537572254,
"acc_norm_stderr": 0.022797110278071128
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.45139664804469276,
"acc_stderr": 0.016643307372315872,
"acc_norm": 0.45139664804469276,
"acc_norm_stderr": 0.016643307372315872
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7909967845659164,
"acc_stderr": 0.02309314039837422,
"acc_norm": 0.7909967845659164,
"acc_norm_stderr": 0.02309314039837422
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.020736358408060006,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.020736358408060006
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.475177304964539,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.475177304964539,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5391134289439374,
"acc_stderr": 0.012731102790504519,
"acc_norm": 0.5391134289439374,
"acc_norm_stderr": 0.012731102790504519
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7279411764705882,
"acc_stderr": 0.027033041151681456,
"acc_norm": 0.7279411764705882,
"acc_norm_stderr": 0.027033041151681456
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.017077373377856926,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.017077373377856926
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7795918367346939,
"acc_stderr": 0.02653704531214529,
"acc_norm": 0.7795918367346939,
"acc_norm_stderr": 0.02653704531214529
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8706467661691543,
"acc_stderr": 0.023729830881018526,
"acc_norm": 0.8706467661691543,
"acc_norm_stderr": 0.023729830881018526
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.038444531817709175,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.038444531817709175
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8713450292397661,
"acc_stderr": 0.02567934272327692,
"acc_norm": 0.8713450292397661,
"acc_norm_stderr": 0.02567934272327692
},
"harness|truthfulqa:mc|0": {
"mc1": 0.39167686658506734,
"mc1_stderr": 0.017087795881769625,
"mc2": 0.5441532764532347,
"mc2_stderr": 0.015072690852418868
},
"harness|winogrande|5": {
"acc": 0.8476716653512234,
"acc_stderr": 0.010099208246065614
},
"harness|gsm8k|5": {
"acc": 0.6694465504169825,
"acc_stderr": 0.012957496367085024
}
}
```
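The "all" block above is an aggregate over the per-task entries; a helper like the following recomputes such a macro-average (the two values are copied from the JSON excerpt, so this is only a sketch over a subset of the tasks):

```python
# Per-task accuracies copied from the results excerpt above (subset only)
per_task = {
    "harness|hendrycksTest-virology|5": {"acc": 0.5783132530120482},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.8713450292397661},
}

def mean_metric(results: dict, metric: str = "acc") -> float:
    """Macro-average a metric over all task blocks that report it."""
    values = [block[metric] for block in results.values() if metric in block]
    return sum(values) / len(values)

avg = mean_metric(per_task)
```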
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
chuckn/dummy_code | ---
license: apache-2.0
---
|
XiaHan19/ai2_arc4MC | ---
license: unknown
---
|
luist18/ptparl | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: PTPARL
dataset_info:
features:
- name: text
dtype: string
- name: group
dtype: string
# dtype:
# class_label:
# names:
# '0': PS
# '1': CDS-PP
# '2': PCP
# '3': BE
# '4': PSD
# '5': PEV
# '6': PAN
# '7': CH
# '8': IL
# '9': L
- name: wing
dtype:
class_label:
names:
'0': LEFT
'1': LEAN_LEFT
'2': CENTER
'3': LEAN_RIGHT
'4': RIGHT
--- |
Falah/chapter8_1_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2646
num_examples: 9
download_size: 3300
dataset_size: 2646
---
# Dataset Card for "chapter8_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_81_1713126325 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 273827
num_examples: 681
download_size: 141779
dataset_size: 273827
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Viktor03/ValeriyBarinov | ---
license: openrail
---
|
CasperLD/Pizza_Dataset | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3787499.0
num_examples: 80
download_size: 0
dataset_size: 3787499.0
---
# Dataset Card for "Pizza_Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manycore-research/faceformer | ---
license: mit
---
|
syzym/muc | ---
license: apache-2.0
---
|
kaleemWaheed/twitter_dataset_1713187020 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 28810
num_examples: 64
download_size: 15330
dataset_size: 28810
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bri25yu/flores200_val_test | ---
dataset_info:
features:
- name: id
dtype: int32
- name: source_lang
dtype: string
- name: target_lang
dtype: string
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: val
num_bytes: 2132022.3333333335
num_examples: 5000
- name: test
num_bytes: 4264044.666666667
num_examples: 10000
download_size: 4975535
dataset_size: 6396067.0
---
# Dataset Card for "flores200_val_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
projecte-aina/CA-EU_Parallel_Corpus | ---
language:
- ca
- eu
multilinguality:
- multilingual
pretty_name: CA-EU Parallel Corpus
size_categories:
- 1M<n<10M
task_categories:
- translation
task_ids: []
license: cc-by-nc-sa-4.0
---
# Dataset Card for CA-EU Parallel Corpus
## Dataset Description
- **Point of Contact:** langtech@bsc.es
### Dataset Summary
The CA-EU Parallel Corpus is a Catalan-Basque synthetic dataset of **9.692.996** parallel sentences.
The dataset was created to support the use of co-official languages from Spain, such as Catalan and Basque,
in NLP tasks, specifically Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train Bilingual Machine Translation models between Basque and Catalan in any direction,
as well as Multilingual Machine Translation models.
### Languages
The sentences included in the dataset are in Catalan (CA) and Basque (EU).
## Dataset Structure
### Data Instances
Two separate txt files are provided with the sentences sorted in the same order:
- train_clean.ca: contains 9.692.996 Catalan sentences (synthetic).
- train_clean.eu: contains 9.692.996 Basque sentences (authentic).
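Since the two files are aligned line by line, sentence pairs can be recovered by zipping them; a minimal sketch (the in-memory buffers stand in for `train_clean.ca` and `train_clean.eu`, and the example sentences are illustrative):

```python
import io

def read_pairs(f_ca, f_eu):
    """Yield aligned (Catalan, Basque) sentence pairs from two parallel files."""
    for ca, eu in zip(f_ca, f_eu):
        yield ca.rstrip("\n"), eu.rstrip("\n")

# Demo with in-memory buffers standing in for the two dataset files
ca_file = io.StringIO("Bon dia.\nMoltes gràcies.\n")
eu_file = io.StringIO("Egun on.\nEskerrik asko.\n")
pairs = list(read_pairs(ca_file, eu_file))
```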
### Data Fields
[N/A]
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Curation Rationale
This dataset is aimed at promoting the development of Machine Translation between Catalan
and other co-official languages from Spain, specifically Basque.
### Source Data
#### Initial Data Collection and Normalization
This synthetic dataset was created in the frame of Project Ilenia.
An authentic ES-EU parallel corpus was delivered by [HiTZ](http://hitz.eus/), and the Spanish side was
translated into Catalan using the machine translation model [PlanTL-GOB-ES](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca).
**Total: 9.692.996 parallel sentences**.
#### Who are the source language producers?
[HiTZ](http://hitz.eus/)
### Annotations
#### Annotation process
The dataset does not contain any annotations.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
Given that this dataset is partly derived from pre-existing datasets that may contain crawled data, and that no specific anonymisation process has been applied,
personal and sensitive information may be present in the data. This needs to be considered when using the data for training models.
## Considerations for Using the Data
### Social Impact of Dataset
By providing this resource, we intend to promote the use of Catalan and Basque, two of the co-official languages of Spain,
across NLP tasks, thereby improving the accessibility and visibility of both Catalan and Basque.
### Discussion of Biases
No specific bias mitigation strategies were applied to this dataset.
Inherent biases may exist within the data.
### Other Known Limitations
The dataset contains data of a general domain.
Applications of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Dataset Curators
Language Technologies Unit at the Barcelona Supercomputing Center (langtech@bsc.es).
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU
within the framework of the [project ILENIA](https://proyectoilenia.es/)
with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 and 2022/TL22/00215334.
### Licensing Information
This work is licensed under a [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
[N/A]
### Contributions
[N/A] |
JackB09/aircrafts | ---
license: unknown
language:
- en
size_categories:
- n<1K
viewer: true
--- |
Moreza009/Tehran_covid | ---
license: apache-2.0
---
|
ruliad/factual-expert-processed-v2-packed | ---
dataset_info:
features:
- name: text
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 17899779962
num_examples: 517216
download_size: 10456721289
dataset_size: 17899779962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
YBXL/NEJM_Reasoning_test | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1128386
num_examples: 146
- name: valid
num_bytes: 1128386
num_examples: 146
- name: test
num_bytes: 1128386
num_examples: 146
download_size: 1195515
dataset_size: 3385158
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
---
|
bjoernp/ultrachat_de | ---
dataset_info:
features:
- name: prompt_id
dtype: string
- name: prompt
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: len_en
dtype: int64
- name: len_de
dtype: int64
- name: system_prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 5676838
num_examples: 959
download_size: 3083642
dataset_size: 5676838
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
language:
- de
---
# German UltraChat
This dataset contains the first 1k prompts from [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) translated to German and inference on with GPT-4. |
tyzhu/lmind_nq_train6000_eval6489_v1_recite_qa_v3 | ---
dataset_info:
features:
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train_qa
num_bytes: 697367
num_examples: 6000
- name: train_ic_qa
num_bytes: 4540536
num_examples: 6000
- name: train_recite_qa
num_bytes: 4546536
num_examples: 6000
- name: eval_qa
num_bytes: 752802
num_examples: 6489
- name: eval_ic_qa
num_bytes: 4906186
num_examples: 6489
- name: eval_recite_qa
num_bytes: 4912675
num_examples: 6489
- name: all_docs
num_bytes: 7126313
num_examples: 10925
- name: all_docs_eval
num_bytes: 7125701
num_examples: 10925
- name: train
num_bytes: 9568899
num_examples: 16925
- name: validation
num_bytes: 4103798
num_examples: 6489
download_size: 30086951
dataset_size: 48280813
configs:
- config_name: default
data_files:
- split: train_qa
path: data/train_qa-*
- split: train_ic_qa
path: data/train_ic_qa-*
- split: train_recite_qa
path: data/train_recite_qa-*
- split: eval_qa
path: data/eval_qa-*
- split: eval_ic_qa
path: data/eval_ic_qa-*
- split: eval_recite_qa
path: data/eval_recite_qa-*
- split: all_docs
path: data/all_docs-*
- split: all_docs_eval
path: data/all_docs_eval-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
zhangchi0104/MaaOcrDataset | ---
license: mit
---
|
income/quora-top-20-gen-queries | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
---
# Quora: top-20 generated queries (BEIR Benchmark)
This HF dataset contains the top-20 synthetic queries generated for each passage in the above BEIR benchmark dataset.
- DocT5query model used: [BeIR/query-gen-msmarco-t5-base-v1](https://huggingface.co/BeIR/query-gen-msmarco-t5-base-v1)
- id (str): unique document id in Quora in the BEIR benchmark (`corpus.jsonl`).
- Questions generated: 20
- Code used for generation: [evaluate_anserini_docT5query_parallel.py](https://github.com/beir-cellar/beir/blob/main/examples/retrieval/evaluation/sparse/evaluate_anserini_docT5query_parallel.py)
Below contains the old dataset card for the BEIR benchmark.
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
```python
from datasets import load_dataset

# Example (config names assumed from the BeIR org on the Hub):
# each dataset exposes "corpus" and "queries" configurations.
corpus = load_dataset("BeIR/scifact", "corpus")
queries = load_dataset("BeIR/scifact", "queries")
```
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates retrieval models with metrics such as nDCG@10, MAP and Recall@k.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
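A toy corpus, queries file and qrels file in exactly this layout can be produced with the standard library; a minimal sketch:

```python
import csv
import io
import json

# corpus.jsonl / queries.jsonl: one JSON object per line
corpus_lines = [json.dumps({"_id": "doc1", "title": "Albert Einstein",
                            "text": "Albert Einstein was a German-born...."})]
query_lines = [json.dumps({"_id": "q1",
                           "text": "Who developed the mass-energy equivalence formula?"})]

# qrels.tsv: tab-separated, with a header as the first row
qrels_buf = io.StringIO()
writer = csv.writer(qrels_buf, delimiter="\t")
writer.writerow(["query-id", "corpus-id", "score"])
writer.writerow(["q1", "doc1", 1])

# Read the qrels back into the {query-id: {corpus-id: score}} shape
qrels_buf.seek(0)
reader = csv.DictReader(qrels_buf, delimiter="\t")
qrels = {}
for row in reader:
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
```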
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
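Given the `qrels` dictionary above and a system's ranked results, standard retrieval metrics follow directly; a minimal Precision@k sketch (the `retrieved` rankings below are made-up):

```python
qrels = {
    "q1": {"doc1": 1},
    "q2": {"doc2": 1},
}
# Hypothetical ranked results returned by some retriever
retrieved = {
    "q1": ["doc1", "doc2"],
    "q2": ["doc1", "doc2"],
}

def precision_at_k(qrels, retrieved, k=1):
    """Mean fraction of relevant documents in each query's top-k results."""
    per_query = []
    for qid, ranking in retrieved.items():
        relevant = qrels.get(qid, {})
        hits = sum(1 for doc_id in ranking[:k] if relevant.get(doc_id, 0) > 0)
        per_query.append(hits / k)
    return sum(per_query) / len(per_query)

p_at_1 = precision_at_k(qrels, retrieved, k=1)  # q1 hits at rank 1, q2 misses
```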
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
Top-20 generated queries for every passage in NFCorpus
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models with standard retrieval metrics such as nDCG@10.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
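A `qrels` file in this layout can be parsed into a nested dictionary with the standard library alone; a minimal sketch (the sample rows below are illustrative):

```python
import csv
import io

# A tiny qrels file in the layout above: query-id, corpus-id, score (tab-separated, header row first).
qrels_tsv = "query-id\tcorpus-id\tscore\nq1\tdoc1\t1\nq2\tdoc2\t1\n"

qrels = {}
reader = csv.reader(io.StringIO(qrels_tsv), delimiter="\t")
next(reader)  # skip the header row
for query_id, corpus_id, score in reader:
    qrels.setdefault(query_id, {})[corpus_id] = int(score)

print(qrels)  # {'q1': {'doc1': 1}, 'q2': {'doc2': 1}}
```

For a real dataset, replace the in-memory string with an open file handle for the downloaded `qrels/*.tsv`.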
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
        "text": "Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
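To illustrate how the three components fit together, here is a toy word-overlap retriever scored against the qrels above. This is purely illustrative and not the official BEIR evaluation, which reports metrics such as nDCG@10:

```python
corpus = {
    "doc1": {"title": "Albert Einstein",
             "text": "Albert Einstein developed the mass-energy equivalence formula"},
    "doc2": {"title": "",
             "text": "Wheat beer is brewed with a large proportion of wheat"},
}
queries = {
    "q1": "who developed the mass-energy equivalence formula",
    "q2": "which beer is brewed with a large proportion of wheat",
}
qrels = {"q1": {"doc1": 1}, "q2": {"doc2": 1}}

def overlap(query: str, doc: dict) -> int:
    # Naive relevance score: number of words shared between query and document.
    q_words = set(query.lower().split())
    d_words = set((doc["title"] + " " + doc["text"]).lower().split())
    return len(q_words & d_words)

hits = 0
for qid, qtext in queries.items():
    top_doc = max(corpus, key=lambda did: overlap(qtext, corpus[did]))
    hits += qrels[qid].get(top_doc, 0)  # counts 1 when the top-ranked doc is judged relevant

print(f"Precision@1 = {hits / len(queries):.2f}")  # Precision@1 = 1.00
```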
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query-document relevance judgements, made up of:
- `query-id`: a `string` feature representing the unique query id.
- `corpus-id`: a `string` feature representing the document id.
- `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
dkuntso/gen-qm-17000 | ---
dataset_info:
features:
- name: utterance
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 27449266
num_examples: 14960
- name: test
num_bytes: 1929362
num_examples: 1020
- name: validation
num_bytes: 1871516
num_examples: 1020
download_size: 3761317
dataset_size: 31250144
task_categories:
- text-generation
language:
- en
pretty_name: Generate Query/Model from Request 15000/1000/1000
license: apache-2.0
size_categories:
- 10K<n<100K
---
# Dataset Card for "gen-qm-17000"
### Dataset Summary
Dataset for converting a request into a query and extracting the model name.
TRAIN/VAL/TEST: 14,960 / 1,020 / 1,020
SIZE: 17,000
### Supported Tasks and Leaderboards
The tasks represented in GEN-QM cover text2text generation for producing queries from a request or extracting model names.
### Languages
The data in QM are in English.
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```python
{
'answer': '$count(EventCategory.Children) $neq 1029',
 'utterance': 'Instructions: Based on Request and Model Description generate query with represents requests filter. Generaly query statement consists of path to the models column on the left, operator of comparison in the middle started with $ and comparison value on the right. Also query can contain more than one statement combined with $and or $or operator.\nModel Description: CreatedByUserName as created by user name;ModifiedByUserName as modified by user name;CreatedOn as created on;ModifiedOn as modified on;EventCategory.IsApprovalRequired as is approval required of experience category;EventCategory.Name as name of experience category;EventCategory.Code as code of experience category;EventCategory.CreatedByUserName as created by user name of experience category;EventCategory.ModifiedByUserName as modified by user name of experience category;EventCategory.Priority as priority of experience category;EventCategory.CreatedOn as created on of experience category;EventCategory.ModifiedOn as modified on of experience category;EventCategory.EventInCategories as experience in categories of experience category,event in categories of event category;EventCategory.EventCategoryInTypes as event category in types of experience category,experience category in types of event category;EventCategory.Children as children of experience category,children categories of event category;EventCategoryType.Name as name of experience category type;EventCategoryType.CreatedByUserName as created by user name of experience category type;EventCategoryType.ModifiedByUserName as modified by user name of experience category type;EventCategoryType.CreatedOn as created on of experience category type;EventCategoryType.ModifiedOn as modified on of experience category type;EventCategoryType.EventCategoryInTypes as event category in types of experience category type,experience category in types of event category type\nRequest: select event category in type where count of children of experience category != one thousand and twenty-nine\nQuery:'
}
```
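Each `utterance` bundles the instructions, model description, and request into one prompt string. A small sketch of splitting such an utterance back into its sections (the section markers are taken from the example above; the short sample text is hypothetical):

```python
def split_utterance(utterance: str) -> dict:
    # The prompt sections appear in this fixed order, each introduced by a labeled marker.
    markers = ["Instructions:", "Model Description:", "Request:", "Query:"]
    sections = {}
    rest = utterance
    for marker, next_marker in zip(markers, markers[1:] + [None]):
        _, _, rest = rest.partition(marker)
        if next_marker is not None:
            body, _, tail = rest.partition(next_marker)
            rest = next_marker + tail
        else:
            body = rest
        sections[marker.rstrip(":")] = body.strip()
    return sections

example = ("Instructions: Based on Request and Model Description generate query.\n"
           "Model Description: EventCategory.Name as name of experience category\n"
           "Request: select event category\n"
           "Query:")
parsed = split_utterance(example)
print(parsed["Request"])  # select event category
```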
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0. |
VishaalY/synthetic-code-generations | ---
license: apache-2.0
---
This dataset was synthetically generated using Mixtral-8x7B to create unique instructions, following the [Magicoder paper](https://arxiv.org/abs/2312.02120) and reproducing its results while modifying specific attributes (snippets are larger, and instructions/responses are longer and more specific).
Below is the prompt used to generate the instruction set:
``` python
prompt=f"""<s>[INST] You are an incredibly intelligent programming AI with expertise in CloudFormation, Terraform, AWS CDK and {lang}. Please gain inspiration from the following code snippet to create the highest-quality programming problem.
Present your problem and solution in two sections: **[Programming Question]** and **[Solution]**.
Code snippet in {lang} for inspiration:
{snippet}
The **[Programming Question]** section must be completely self-contained, providing all the contextual information one needs to understand and solve the problem.
Assume common programming knowledge, but ensure that any specific context, variables, or code snippets pertinent to this problem are explicitly included.
Do NOT include a title, just the question and keep this section as brief as possible.
The **[Solution]** must offer a comprehensive solution that accurately and CORRECTLY addresses the **[Programming Question]** you provided. [/INST]"""
```
The dataset contains problem sets for Python, JavaScript, TypeScript, C++, C, YAML and others. Snippets were generated using [the Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup) and AWS documentation, and only repos with stars and Apache-2.0/MIT licenses were used as snippet sources.
Please share if you use this dataset to train any models; I am curious to see everyone's results!
|
songlab/gpn-msa-hg38-scores | ---
license: mit
tags:
- dna
- variant-effect-prediction
- biology
- genomics
---
# GPN-MSA predictions for all possible SNPs in the human genome (~9 billion)
For more information check out our [paper](https://doi.org/10.1101/2023.10.10.561776) and [repository](https://github.com/songlab-cal/gpn).
## Querying specific variants or genes
- Install the latest [tabix](https://www.htslib.org/doc/tabix.html):
In your current conda environment (might be slow):
```bash
conda install -c bioconda -c conda-forge htslib=1.18
```
or in a new conda environment:
```bash
conda create -n tabix -c bioconda -c conda-forge htslib=1.18
conda activate tabix
```
- Query a specific region (e.g. BRCA1), from the remote file:
```bash
tabix https://huggingface.co/datasets/songlab/gpn-msa-hg38-scores/resolve/main/scores.tsv.bgz 17:43,044,295-43,125,364
```
The output has the following columns:
| chrom | pos | ref | alt | GPN-MSA score |
|-------|-----|-----|-----|---------------|
and would start like this:
```tsv
17 43044295 T A -1.60
17 43044295 T C -1.47
17 43044295 T G -1.61
17 43044296 G A -1.12
17 43044296 G C -1.46
17 43044296 G T -1.45
17 43044297 G A -1.45
17 43044297 G C -1.55
17 43044297 G T -1.54
17 43044298 A C -1.64
```
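The tab-separated lines returned by `tabix` are easy to work with in Python; a small sketch using the sample rows above:

```python
# Sample tabix output rows (chrom, pos, ref, alt, GPN-MSA score), tab-separated.
sample = (
    "17\t43044295\tT\tA\t-1.60\n"
    "17\t43044295\tT\tC\t-1.47\n"
    "17\t43044295\tT\tG\t-1.61\n"
    "17\t43044296\tG\tA\t-1.12"
)

records = []
for line in sample.splitlines():
    chrom, pos, ref, alt, score = line.split("\t")
    records.append({"chrom": chrom, "pos": int(pos), "ref": ref,
                    "alt": alt, "score": float(score)})

# e.g. find the lowest-scoring (most deleterious) variant in this slice
worst = min(records, key=lambda r: r["score"])
print(worst["pos"], worst["ref"], worst["alt"], worst["score"])  # 43044295 T G -1.61
```

In practice you would pipe real `tabix` output into the same parsing loop.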
- If you want to do many queries you might want to first download the files locally
```bash
wget https://huggingface.co/datasets/songlab/gpn-msa-hg38-scores/resolve/main/scores.tsv.bgz
wget https://huggingface.co/datasets/songlab/gpn-msa-hg38-scores/resolve/main/scores.tsv.bgz.tbi
```
and then run your queries against the local copy:
```bash
tabix scores.tsv.bgz 17:43,044,295-43,125,364
``` |
HKBU-NLP/Code-Evol-Instruct-OSS | ---
license: bigcode-openrail-m
language:
- en
size_categories:
- 1K<n<10K
---
# Code-Evol-Instruct-OSS
## Summary
Code-Evol-Instruct-OSS is a dataset generated with Code Evol-Instruct by prompting the open-source LLMs WizardLM-13B-v1.2 and WizardCoder-34B-Python.
The underlying process is explained in the [WizardCoder paper](https://arxiv.org/abs/2306.08568). This algorithm gave birth to the well-known open-source WizardCoder family of code LLMs.
## Our approach
- We did not use any closed-source LLMs.
- Our seed dataset is sourced from [self-instruct-starcoder](https://huggingface.co/datasets/codeparrot/self-instruct-starcoder).
- We leverage WizardLM-13B-v1.2 to evolve the instructions over three rounds.
- The responses to each instruction are generated using WizardCoder-34B-Python.
- Samples that are excessively long or lack code responses are filtered out.
- The final dataset contains 4308 samples.
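The filtering step in the bullets above could look like this sketch; the length threshold and code-detection heuristic are hypothetical, not the exact criteria used:

```python
FENCE = "`" * 3  # a literal triple backtick, built indirectly so it can live inside this fenced example

def keep_sample(instruction: str, response: str, max_chars: int = 6000) -> bool:
    # Drop overly long samples and responses that contain no fenced code block.
    too_long = len(instruction) + len(response) > max_chars
    has_code = FENCE in response
    return has_code and not too_long

good = FENCE + "python\ndef f():\n    return 1\n" + FENCE
print(keep_sample("Write a function", good))                                   # True
print(keep_sample("Write a function", "An explanation with no code at all."))  # False
```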
## Preliminary Experiments
We've fine-tuned starcoderbase-3b on this dataset, achieving 28.7 pass@1 on HumanEval (greedy decoding), surpassing the original model by approximately 8 points.
## Citation
If you use this dataset, please cite the paper of WizardCoder.
```
@misc{luo2023wizardcoder,
title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
author={Ziyang Luo and Can Xu and Pu Zhao and Qingfeng Sun and Xiubo Geng and Wenxiang Hu and Chongyang Tao and Jing Ma and Qingwei Lin and Daxin Jiang},
year={2023},
eprint={2306.08568},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Warlord-K/parti-prompts-sdxl-1.0 | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: model_name
dtype: string
- name: seed
dtype: int64
- name: images
dtype: image
splits:
- name: train
num_bytes: 2617808054.24
num_examples: 1632
download_size: 2616607357
dataset_size: 2617808054.24
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "parti-prompts-sdxl-1.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lewtun/raft-test-submission | ---
benchmark: raft
type: prediction
submission_name: Test submission 0
---
# RAFT submissions for raft-test-submission
## Submitting to the leaderboard
To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week.
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
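A quick self-contained check of that two-column format might look like this sketch (the file contents and label values below are hypothetical):

```python
import csv
import io

# A toy predictions file in the required format: an integer ID and a string Label.
predictions_csv = "ID,Label\n0,Unlabeled\n1,ADE-related\n"

rows = list(csv.DictReader(io.StringIO(predictions_csv)))
assert rows and set(rows[0]) == {"ID", "Label"}, "expected exactly the columns ID and Label"
for row in rows:
    int(row["ID"])       # ID must parse as an integer
    assert row["Label"]  # Label must be a non-empty string
print(f"{len(rows)} predictions validated")  # 2 predictions validated
```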
See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
```python
from pathlib import Path
import pandas as pd
from collections import Counter
from datasets import load_dataset, get_dataset_config_names
tasks = get_dataset_config_names("ought/raft")
for task in tasks:
# Load dataset
raft_subset = load_dataset("ought/raft", task)
# Compute majority class over training set
counter = Counter(raft_subset["train"]["Label"])
majority_class = counter.most_common(1)[0][0]
# Load predictions file
preds = pd.read_csv(f"data/{task}/predictions.csv")
# Convert label IDs to label names
preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
# Save predictions
preds.to_csv(f"data/{task}/predictions.csv", index=False)
```
As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
```
data
├── ade_corpus_v2
│ ├── predictions.csv
│ └── task.json
├── banking_77
│ ├── predictions.csv
│ └── task.json
├── neurips_impact_statement_risks
│ ├── predictions.csv
│ └── task.json
├── one_stop_english
│ ├── predictions.csv
│ └── task.json
├── overruling
│ ├── predictions.csv
│ └── task.json
├── semiconductor_org_types
│ ├── predictions.csv
│ └── task.json
├── systematic_review_inclusion
│ ├── predictions.csv
│ └── task.json
├── tai_safety_research
│ ├── predictions.csv
│ └── task.json
├── terms_of_service
│ ├── predictions.csv
│ └── task.json
├── tweet_eval_hate
│ ├── predictions.csv
│ └── task.json
└── twitter_complaints
├── predictions.csv
└── task.json
```
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaluated on Sunday 05 September 2021 ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. |
VPixel/dataset-no-1 | ---
license: mit
task_categories:
- conversational
- translation
language:
- aa
tags:
- chemistry
- biology
- climate
pretty_name: Pretty name 1
size_categories:
- n<1K
--- |
AfshanAhmed/training-data | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 301571473.0
num_examples: 300
download_size: 301565751
dataset_size: 301571473.0
---
# Dataset Card for "training-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jbrophy123/alpaca_dataset | ---
dataset_info:
features:
- name: chat_sample
dtype: string
- name: dataset_origin
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2287315
num_examples: 5000
download_size: 0
dataset_size: 2287315
---
# Dataset Card for "alpaca_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_cola_perfect_already | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1497
num_examples: 13
- name: test
num_bytes: 1639
num_examples: 18
- name: train
num_bytes: 16569
num_examples: 207
download_size: 15249
dataset_size: 19705
---
# Dataset Card for "MULTI_VALUE_cola_perfect_already"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LongNN/news_sum | ---
license: gpl-3.0
---
|
aihdu111/daisy | ---
license: other
---
|
liuyanchen1015/MULTI_VALUE_stsb_finna_future | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 11861
num_examples: 54
- name: test
num_bytes: 6938
num_examples: 36
- name: train
num_bytes: 19901
num_examples: 84
download_size: 36510
dataset_size: 38700
---
# Dataset Card for "MULTI_VALUE_stsb_finna_future"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Nexdata/Chinese_Mandarin_Songs_in_Acapella__Female | ---
---
# Dataset Card for Nexdata/Chinese_Mandarin_Songs_in_Acapella__Female
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1151?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
103 Chinese Mandarin songs performed a cappella by a professional female Chinese singer with a sweet voice. Professional phoneticians participated in the annotation. The dataset precisely matches the research and development needs of song synthesis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1151?source=Huggingface
### Supported Tasks and Leaderboards
tts: The dataset can be used to train a model for Text-to-Speech (TTS).
### Languages
Chinese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
wietsedv/stsbenchmark | ---
license: cc-by-sa-4.0
---
|
PinkysMusing/Banners | ---
license: cc
---
|
polinaeterna/old_push2 | ---
dataset_info:
- config_name: custom
features:
- name: x
dtype: int64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 80
num_examples: 5
download_size: 1317
dataset_size: 80
- config_name: default
features:
- name: x
dtype: int64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 160
num_examples: 10
download_size: 1371
dataset_size: 160
builder_config:
- config_name: custom
data_files:
- split: train
pattern: custom/train-*
- config_name: default
data_files:
- split: train
pattern: data/train-*
---
# Dataset Card for "old_push2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jxm/dbpedia | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 14782633
num_examples: 49999
- name: test
num_bytes: 20641120
num_examples: 70000
- name: dev
num_bytes: 74007
num_examples: 256
download_size: 21721890
dataset_size: 35497760
---
# Dataset Card for "dbpedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kanaka123/new_room | ---
dataset_info:
features:
- name: image
dtype: image
- name: additional_feature
dtype: string
splits:
- name: train
num_bytes: 2671952.0
num_examples: 20
download_size: 2635392
dataset_size: 2671952.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
soulhq-ai/insuranceQA-v2 | ---
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- finance
- insurance
size_categories:
- 10K<n<100K
---
This dataset was released as a part of <a id="2" href="https://ieeexplore.ieee.org/abstract/document/7404872/">Feng, Minwei, et al. "Applying deep learning to answer selection: A study and an open task." 2015 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, 2015</a>.
We've deconstructed the tokens provided at https://github.com/shuzi/insuranceQA/tree/master/V2. |
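The original V2 release stores questions and answers as sequences of token ids plus a separate vocabulary file, so "deconstructing" means mapping those ids back to words. A minimal sketch of that decoding step, using a hypothetical toy vocabulary (the real release's id format and file layout may differ):

```python
# Toy vocabulary standing in for the release's vocabulary file (hypothetical ids).
vocab = {"idx_1": "what", "idx_2": "does", "idx_3": "insurance", "idx_4": "cover"}

def decode(line: str, vocab: dict) -> str:
    """Map a space-separated sequence of token ids back to plain text."""
    return " ".join(vocab.get(tok, "<unk>") for tok in line.split())

print(decode("idx_1 idx_2 idx_3 idx_4", vocab))  # what does insurance cover
```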
arthurmluz/wikilingua_data-temario_results | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: summary
dtype: string
- name: gen_summary
dtype: string
- name: rouge
struct:
- name: rouge1
dtype: float64
- name: rouge2
dtype: float64
- name: rougeL
dtype: float64
- name: rougeLsum
dtype: float64
- name: bert
struct:
- name: f1
sequence: float64
- name: hashcode
dtype: string
- name: precision
sequence: float64
- name: recall
sequence: float64
- name: moverScore
dtype: float64
splits:
- name: validation
num_bytes: 31900191
num_examples: 8165
download_size: 19378476
dataset_size: 31900191
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "wikilingua_data-temario_results"
- rouge = {'rouge1': 0.17417346657091554, 'rouge2': 0.05244434884193, 'rougeL': 0.11143891313862225, 'rougeLsum': 0.11143891313862225}
- BERTScore = {'precision': 0.6341577677623086, 'recall': 0.7350342140413835, 'f1': 0.6800217146312832}
- MoverScore = 0.5511240248681097 |
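The card above reports averaged ROUGE, BERTScore, and MoverScore results. As an illustration of what the rouge1 figure measures, here is a minimal sketch of ROUGE-1 F1 as clipped unigram overlap; the published numbers come from full metric implementations (stemming, tokenization, etc.), so treat this only as a didactic approximation:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between reference and candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Each candidate token counts at most as often as it appears in the reference.
    overlap = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))
```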
mainlp/pervasive_imdb | ---
license: gpl-3.0
---
|
another-symato/law-dedup | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 693572294
num_examples: 411025
download_size: 259132261
dataset_size: 693572294
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|