id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
ymoslem/Law-StackExchange | 2023-08-20T17:25:54.000Z | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-4.0",
"legal",
"region:us"
] | ymoslem | null | null | 7 | 17 | 2023-08-20T16:54:45 | ---
license: cc-by-sa-4.0
task_categories:
- question-answering
- text-classification
- sentence-similarity
language:
- en
tags:
- legal
pretty_name: Law Stack Exchange Questions and Answers
size_categories:
- 10K<n<100K
---
All legal questions and their answers from the Law Stack Exchange site, up to 14 August 2023. The repository includes a notebook documenting the collection process using the official Stack Exchange API. | 407 | [
[
-0.043731689453125,
-0.048553466796875,
0.05914306640625,
0.039215087890625,
-0.018157958984375,
-0.0369873046875,
0.01413726806640625,
-0.06201171875,
0.02423095703125,
0.07122802734375,
-0.047576904296875,
-0.003528594970703125,
-0.0139923095703125,
0.0050... |
zxvix/pubmed_subset_new | 2023-08-23T09:04:37.000Z | [
"region:us"
] | zxvix | null | null | 0 | 17 | 2023-08-23T08:08:51 | ---
dataset_info:
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 3033204166.457245
num_examples: 1000000
- name: test
num_bytes: 3033204.166457245
num_examples: 1000
download_size: 1638343655
dataset_size: 3036237370.623702
---
# Dataset Card for "pubmed_subset_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 2,841 | [
[
-0.032684326171875,
-0.0075225830078125,
0.029937744140625,
0.00213623046875,
-0.0305938720703125,
0.0028247833251953125,
0.0203399658203125,
0.00008618831634521484,
0.066650390625,
0.0482177734375,
-0.0487060546875,
-0.057647705078125,
-0.0484619140625,
0.0... |
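The nested feature schema in the `pubmed_subset_new` card above reads as plain Python nesting: each `struct` corresponds to a dict and each `sequence` to a list. A minimal sketch with a toy record (all field values here are made up) illustrates the access pattern, assuming the dataset's rows follow that schema:

```python
# Toy record shaped like a fragment of the pubmed_subset_new schema:
# each `struct` maps to a dict, each `sequence` to a list of dicts.
record = {
    "MedlineCitation": {
        "PMID": 123456,
        "Article": {
            "Abstract": {"AbstractText": "Example abstract."},
            "ArticleTitle": "Example title",
            "AuthorList": {
                "Author": [
                    {"LastName": "Doe", "ForeName": "Jane",
                     "Initials": "J", "CollectiveName": ""}
                ]
            },
        },
    },
    "PubmedData": {"PublicationStatus": "ppublish"},
}

# Drill into the nested structs/sequences:
title = record["MedlineCitation"]["Article"]["ArticleTitle"]
first_author = record["MedlineCitation"]["Article"]["AuthorList"]["Author"][0]
print(title)                     # Example title
print(first_author["LastName"])  # Doe
```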
TaylorAI/pubmed_commercial | 2023-08-26T07:32:30.000Z | [
"region:us"
] | TaylorAI | null | null | 11 | 17 | 2023-08-23T19:00:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
vikp/evol_instruct_code_filtered_39k | 2023-08-29T17:35:13.000Z | [
"region:us"
] | vikp | null | null | 3 | 17 | 2023-08-29T14:35:42 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 56854896.038860105
num_examples: 39078
download_size: 37822990
dataset_size: 56854896.038860105
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "evol_instruct_code_filtered_39k"
Filtered version of `nickrosh/Evol-Instruct-Code-80k-v1`, combining manual filtering with automatic filtering based on quality and learning-value classifiers. | 632 | [
[
-0.046783447265625,
-0.0018701553344726562,
0.0003693103790283203,
-0.0299835205078125,
-0.0601806640625,
0.0168304443359375,
0.01678466796875,
-0.02685546875,
0.0201263427734375,
0.06976318359375,
-0.050262451171875,
-0.056915283203125,
-0.0224151611328125,
... |
StudentLLM/Sampled_Orca_GPT4 | 2023-08-31T02:58:44.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | StudentLLM | null | null | 0 | 17 | 2023-08-30T06:57:21 | ---
language:
- en
size_categories:
- 10K<n<100K
license: mit
---
# Stratified Sample of Open-Orca 🐬
This dataset is a stratified sample of Open-Orca's GPT-4-answered dataset (1M-GPT4-Augmented.parquet) [[Link](https://huggingface.co/datasets/Open-Orca/OpenOrca)].
For the stratified sampling, `train_test_split` from the scikit-learn library was used.
The specific sampling setup is as follows:
- split_size: 0.05
- shuffle: True
- stratify: `'id'` of Open-Orca dataset | 491 | [
[
-0.038299560546875,
-0.052825927734375,
-0.0028667449951171875,
0.0102691650390625,
-0.043731689453125,
-0.00568389892578125,
0.0014553070068359375,
-0.0229644775390625,
0.052581787109375,
0.03558349609375,
-0.04876708984375,
-0.044158935546875,
-0.0120773315429... |
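The stratified 5% split described in the Sampled_Orca_GPT4 card above can be sketched in pure Python. This is only a minimal illustration of what scikit-learn's `train_test_split(..., stratify=...)` does, with a hypothetical `source` column standing in for Open-Orca's `id`:

```python
import random
from collections import defaultdict

def stratified_sample(rows, key, frac, seed=42):
    """Draw `frac` of the rows from each stratum defined by `key`,
    mirroring the per-group proportions that stratify= preserves."""
    random.seed(seed)
    strata = defaultdict(list)
    for row in rows:
        strata[row[key]].append(row)
    sample = []
    for group in strata.values():
        k = max(1, round(len(group) * frac))
        sample.extend(random.sample(group, k))
    return sample

# Toy data: 'source' plays the role of the Open-Orca 'id' column.
rows = ([{"source": "cot", "i": i} for i in range(100)]
        + [{"source": "flan", "i": i} for i in range(300)])
subset = stratified_sample(rows, "source", 0.05)
print(len(subset))  # 5 from 'cot' + 15 from 'flan' = 20
```

With `frac=0.05` each stratum contributes 5% of its rows, so group proportions in the subset match the full data, which is the point of stratifying on `id`.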
Falah/photography_prompts | 2023-09-10T12:53:20.000Z | [
"region:us"
] | Falah | null | null | 1 | 17 | 2023-09-10T12:53:18 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 36884997
num_examples: 100000
download_size: 5112133
dataset_size: 36884997
---
# Dataset Card for "photography_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 368 | [
[
-0.043701171875,
-0.00997161865234375,
0.034027099609375,
0.019561767578125,
-0.01702880859375,
-0.01137542724609375,
0.0172119140625,
-0.0094451904296875,
0.037628173828125,
0.0128936767578125,
-0.0791015625,
-0.058624267578125,
-0.0292205810546875,
-0.0063... |
msinankhan1/India_Tax_FAQs | 2023-09-14T12:12:26.000Z | [
"region:us"
] | msinankhan1 | null | null | 0 | 17 | 2023-09-12T07:22:30 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base | 2023-10-24T09:25:34.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 17 | 2023-09-13T01:25:28 | ---
pretty_name: Evaluation run of TigerResearch/tigerbot-70b-base
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TigerResearch/tigerbot-70b-base](https://huggingface.co/TigerResearch/tigerbot-70b-base)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 64 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-24T09:25:20.725516](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base/blob/main/results_2023-10-24T09-25-20.725516.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each one in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4872063758389262,\n\
\ \"em_stderr\": 0.005118791512925044,\n \"f1\": 0.5244914010067125,\n\
\ \"f1_stderr\": 0.004935563924712029,\n \"acc\": 0.5897264974960701,\n\
\ \"acc_stderr\": 0.012277506705422794\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.4872063758389262,\n \"em_stderr\": 0.005118791512925044,\n\
\ \"f1\": 0.5244914010067125,\n \"f1_stderr\": 0.004935563924712029\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3775587566338135,\n \
\ \"acc_stderr\": 0.013353150666358539\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8018942383583267,\n \"acc_stderr\": 0.011201862744487047\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TigerResearch/tigerbot-70b-base
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|arc:challenge|25_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_24T09_25_20.725516
path:
- '**/details_harness|drop|3_2023-10-24T09-25-20.725516.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-24T09-25-20.725516.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_24T09_25_20.725516
path:
- '**/details_harness|gsm8k|5_2023-10-24T09-25-20.725516.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-24T09-25-20.725516.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hellaswag|10_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T01-25-14.196261.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T01-25-14.196261.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_24T09_25_20.725516
path:
- '**/details_harness|winogrande|5_2023-10-24T09-25-20.725516.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-24T09-25-20.725516.parquet'
- config_name: results
data_files:
- split: 2023_09_13T01_25_14.196261
path:
- results_2023-09-13T01-25-14.196261.parquet
- split: 2023_10_24T09_25_20.725516
path:
- results_2023-10-24T09-25-20.725516.parquet
- split: latest
path:
- results_2023-10-24T09-25-20.725516.parquet
---
# Dataset Card for Evaluation run of TigerResearch/tigerbot-70b-base
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TigerResearch/tigerbot-70b-base
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TigerResearch/tigerbot-70b-base](https://huggingface.co/TigerResearch/tigerbot-70b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T09:25:20.725516](https://huggingface.co/datasets/open-llm-leaderboard/details_TigerResearch__tigerbot-70b-base/blob/main/results_2023-10-24T09-25-20.725516.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" and "latest" splits for each eval):
```python
{
"all": {
"em": 0.4872063758389262,
"em_stderr": 0.005118791512925044,
"f1": 0.5244914010067125,
"f1_stderr": 0.004935563924712029,
"acc": 0.5897264974960701,
"acc_stderr": 0.012277506705422794
},
"harness|drop|3": {
"em": 0.4872063758389262,
"em_stderr": 0.005118791512925044,
"f1": 0.5244914010067125,
"f1_stderr": 0.004935563924712029
},
"harness|gsm8k|5": {
"acc": 0.3775587566338135,
"acc_stderr": 0.013353150666358539
},
"harness|winogrande|5": {
"acc": 0.8018942383583267,
"acc_stderr": 0.011201862744487047
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 38,674 | [
[
-0.031280517578125,
-0.043212890625,
0.0120697021484375,
0.014892578125,
-0.016082763671875,
0.0113983154296875,
-0.0272216796875,
-0.01059722900390625,
0.03253173828125,
0.0423583984375,
-0.05059814453125,
-0.0670166015625,
-0.0391845703125,
0.0152740478515... |
Nacholmo/coco-pattern | 2023-09-16T05:43:17.000Z | [
"region:us"
] | Nacholmo | null | null | 0 | 17 | 2023-09-16T04:10:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences_tokens
list:
list: string
- name: sentences_raw
list: string
- name: sentences_sentid
list: int32
- name: cocoid
dtype: int32
- name: id
dtype: int64
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 14068039590.25
num_examples: 113287
download_size: 14013924288
dataset_size: 14068039590.25
---
# Dataset Card for "coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 877 | [
[
-0.0401611328125,
-0.0269012451171875,
0.00255584716796875,
0.035369873046875,
-0.01496124267578125,
0.0188751220703125,
0.00988006591796875,
-0.0289764404296875,
0.06524658203125,
0.035919189453125,
-0.056304931640625,
-0.05914306640625,
-0.04449462890625,
... |
infinityofspace/python_codestyles-random-1k | 2023-10-18T20:42:59.000Z | [
"size_categories:100K<n<1M",
"license:mit",
"python",
"code-style",
"random",
"doi:10.57967/hf/1232",
"region:us"
] | infinityofspace | null | null | 0 | 17 | 2023-09-17T18:24:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: code
dtype: string
- name: code_codestyle
dtype: int64
- name: style_context
dtype: string
- name: style_context_codestyle
dtype: int64
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3604934957
num_examples: 308000
- name: test
num_bytes: 645620388
num_examples: 56400
download_size: 671035436
dataset_size: 4250555345
license: mit
tags:
- python
- code-style
- random
size_categories:
- 100K<n<1M
---
# Dataset Card for "python_codestyles-random-1k"
This dataset contains positive and negative examples of Python code complying with a code style. A positive
example represents compliance with the code style (label is 1). Each example is composed of two components: the first
component is a code snippet that either conforms to the code style or violates it, and the second component is an
example code snippet that already conforms to that code style. In total, the dataset contains `1.000` completely
different code styles. Any two code styles differ in at least one code-style rule, which is why this is called the
`random` codestyle dataset variant. The dataset is split into a training and a test group, with none of the code
styles overlapping between the groups. In addition, both groups contain completely different underlying code.
The examples contain source code from the following repositories:
| repository | tag or commit |
|:-----------------------------------------------------------------------:|:----------------------------------------:|
| [TheAlgorithms/Python](https://github.com/TheAlgorithms/Python) | f614ed72170011d2d439f7901e1c8daa7deac8c4 |
| [huggingface/transformers](https://github.com/huggingface/transformers) | v4.31.0 |
| [huggingface/datasets](https://github.com/huggingface/datasets) | 2.13.1 |
| [huggingface/diffusers](https://github.com/huggingface/diffusers) | v0.18.2 |
| [huggingface/accelerate](https://github.com/huggingface/accelerate) | v0.21.0 |
You can find the corresponding code styles of the examples in the file [additional_data.json](additional_data.json).
The code styles in the file are split by training and test group and the index corresponds to the class for the
columns `code_codestyle` and `style_context_codestyle` in the dataset.
There are 364.400 samples in total: 182.200 positive and 182.200 negative.
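As a sketch of how the columns fit together — the snippets and style IDs below are invented for illustration; only the column names come from the schema above:

```python
# Hypothetical example mirroring the dataset's columns ("code", "code_codestyle",
# "style_context", "style_context_codestyle", "label"); values are made up.
example = {
    "code": "def add(a, b):\n    return a + b\n",
    "code_codestyle": 7,
    "style_context": "def mul(x, y):\n    return x * y\n",
    "style_context_codestyle": 7,
    "label": 1,
}

# The label is 1 exactly when both snippets follow the same code style.
same_style = example["code_codestyle"] == example["style_context_codestyle"]
assert same_style == bool(example["label"])
```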
[
-0.0447998046875,
-0.03277587890625,
-0.0106353759765625,
0.0308990478515625,
-0.012451171875,
-0.0154876708984375,
-0.0133209228515625,
-0.01409912109375,
0.03887939453125,
0.0263214111328125,
-0.053955078125,
-0.0440673828125,
-0.0289764404296875,
0.023300... |
legacy107/sentence_transformer_wikipedia_chunked | 2023-09-19T04:00:50.000Z | [
"region:us"
] | legacy107 | null | null | 0 | 17 | 2023-09-18T08:27:13 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer_start
dtype: int64
- name: answer
dtype: string
- name: article
dtype: string
- name: chunked_article
sequence: string
splits:
- name: train
num_bytes: 3734770114
num_examples: 27742
- name: test
num_bytes: 408448904
num_examples: 3468
- name: validation
num_bytes: 564192755
num_examples: 3458
download_size: 717817867
dataset_size: 4707411773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "qa_wikipedia_sentence_transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 953 | [
[
-0.0369873046875,
-0.0230865478515625,
0.0191192626953125,
0.0099639892578125,
-0.0087738037109375,
-0.01439666748046875,
0.00814056396484375,
0.0001316070556640625,
0.04833984375,
0.031524658203125,
-0.050872802734375,
-0.040557861328125,
-0.033935546875,
-... |
dim/databricks_dolly_15k_ru | 2023-09-20T15:51:37.000Z | [
"region:us"
] | dim | null | null | 0 | 17 | 2023-09-20T15:51:24 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 22121608
num_examples: 14914
download_size: 11365356
dataset_size: 22121608
---
# Dataset Card for "databricks_dolly_15k_ru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [
[
-0.02276611328125,
-0.0214080810546875,
-0.002872467041015625,
0.040283203125,
-0.019134521484375,
0.005435943603515625,
0.042236328125,
0.0012607574462890625,
0.0518798828125,
0.0238037109375,
-0.06805419921875,
-0.045379638671875,
-0.036376953125,
-0.00255... |
infCapital/vnnews-corpus | 2023-09-22T00:10:16.000Z | [
"region:us"
] | infCapital | null | null | 1 | 17 | 2023-09-21T17:43:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tyzhu/eval_tag_nq_test_v0.5 | 2023-09-25T06:07:50.000Z | [
"region:us"
] | tyzhu | null | null | 0 | 17 | 2023-09-25T06:07:43 | ---
dataset_info:
features:
- name: question
dtype: string
- name: title
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
- name: answers
struct:
- name: answer_start
sequence: 'null'
- name: text
sequence: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1972
num_examples: 10
- name: validation
num_bytes: 787384
num_examples: 3610
download_size: 488101
dataset_size: 789356
---
# Dataset Card for "eval_tag_nq_test_v0.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 680 | [
[
-0.04742431640625,
-0.020599365234375,
-0.003551483154296875,
0.005035400390625,
-0.012939453125,
0.0135650634765625,
0.030517578125,
-0.00644683837890625,
0.052398681640625,
0.0309295654296875,
-0.04791259765625,
-0.051605224609375,
-0.01074981689453125,
0.... |
MattCoddity/dockerNLcommands | 2023-10-06T08:35:01.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | MattCoddity | null | null | 2 | 17 | 2023-09-27T04:21:12 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
size_categories:
- 10K<n<100K
---
# Natural Language to Docker Command Dataset
This dataset is designed to translate natural language instructions into Docker commands. It contains mappings of textual phrases to corresponding Docker commands, aiding in the development of models capable of understanding and translating user requests into executable Docker instructions.
## Dataset Format
Each entry in the dataset consists of a JSON object with the following keys:
- `input`: The natural language phrase.
- `instruction`: A static field indicating the task to translate the phrase into a Docker command.
- `output`: The corresponding Docker command.
### Example Entry
```json
{
"input": "Can you show me the digests of all the available Docker images?",
"instruction": "translate this sentence in docker command",
"output": "docker images --digests"
}
```
## Usage
This dataset can be utilized to train and evaluate models for a variety of applications including, but not limited to, Natural Language Processing (NLP), Command Line Interface (CLI) automation, and educational tools for Docker.
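For instance, a minimal sketch of turning one entry into a prompt/target pair for fine-tuning — the prompt formatting here is an assumption, not part of the dataset:

```python
# Hypothetical helper: joins the "instruction" and "input" fields into a
# single prompt string; the "output" field is the training target.
def entry_to_pair(entry):
    prompt = f"{entry['instruction']}: {entry['input']}"
    return prompt, entry["output"]

entry = {
    "input": "Can you show me the digests of all the available Docker images?",
    "instruction": "translate this sentence in docker command",
    "output": "docker images --digests",
}

prompt, target = entry_to_pair(entry)
print(prompt)  # instruction followed by the natural-language request
print(target)  # the Docker command the model should produce
```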
## Commands coverage
- docker ps
- docker images
- docker stop
- docker kill
- docker login
## Contributing
We welcome contributions to improve this dataset. Please feel free to open a Pull Request or an Issue to discuss potential improvements, bug fixes, or other changes. | 1,463 | [
[
-0.05291748046875,
-0.047149658203125,
0.0335693359375,
0.0227813720703125,
-0.03399658203125,
0.0042724609375,
-0.004589080810546875,
-0.0031280517578125,
0.0027923583984375,
0.08026123046875,
-0.0518798828125,
-0.069580078125,
-0.03436279296875,
0.01538848... |
lowem1/mimic_radiology_ocr | 2023-09-27T15:47:13.000Z | [
"region:us"
] | lowem1 | null | null | 0 | 17 | 2023-09-27T15:47:09 | ---
dataset_info:
features:
- name: tag
dtype: string
- name: ocr_data
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2270338
num_examples: 1000
download_size: 1178315
dataset_size: 2270338
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mimic_radiology_ocr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 516 | [
[
-0.022796630859375,
-0.00682830810546875,
0.0308074951171875,
-0.018524169921875,
-0.003887176513671875,
0.0011892318725585938,
0.0260009765625,
-0.033416748046875,
0.056427001953125,
0.03363037109375,
-0.043426513671875,
-0.04522705078125,
-0.04058837890625,
... |
jhuang14/Labeled_Data | 2023-09-28T08:32:36.000Z | [
"region:us"
] | jhuang14 | null | null | 0 | 17 | 2023-09-28T08:32:09 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': bustruck
'2': other
'3': rail
splits:
- name: train
num_bytes: 1652124.1515151516
num_examples: 92
- name: test
num_bytes: 718314.8484848485
num_examples: 40
download_size: 2372957
dataset_size: 2370439.0
---
# Dataset Card for "Labeled_Data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 708 | [
[
-0.039276123046875,
-0.025146484375,
0.01020050048828125,
0.0211181640625,
-0.010223388671875,
-0.00010776519775390625,
0.0149688720703125,
-0.0218048095703125,
0.0552978515625,
0.039276123046875,
-0.049835205078125,
-0.06683349609375,
-0.046722412109375,
-0... |
ashiyakatuka11/corpus1_dataset | 2023-10-03T12:01:15.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 17 | 2023-09-28T10:08:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Session_ID
dtype: int64
- name: 'Speaker '
dtype: string
- name: UserID
dtype: string
- name: prev_Utterance
dtype: string
- name: Utterance
dtype: string
- name: prevUtt_TAG
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 826401
num_examples: 4964
- name: test
num_bytes: 207557
num_examples: 1241
download_size: 426039
dataset_size: 1033958
---
# Dataset Card for "corpus1_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 927 | [
[
-0.044952392578125,
-0.0203857421875,
0.0047607421875,
0.0250396728515625,
-0.01515960693359375,
0.0048675537109375,
0.0091552734375,
-0.00829315185546875,
0.069580078125,
0.03533935546875,
-0.046661376953125,
-0.0692138671875,
-0.052215576171875,
-0.0185241... |
ashiyakatuka11/corpus2_dataset | 2023-10-03T12:01:21.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 17 | 2023-09-28T10:08:18 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Corpus Utterance #'
dtype: int64
- name: 'Session Utterance #'
dtype: string
- name: Time
dtype: string
- name: User
dtype: string
- name: Utterance
dtype: string
- name: TAG
dtype: string
- name: Session ID
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 327599
num_examples: 2720
- name: test
num_bytes: 81553
num_examples: 681
download_size: 165842
dataset_size: 409152
---
# Dataset Card for "corpus2_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 932 | [
[
-0.03314208984375,
-0.01666259765625,
0.007183074951171875,
0.0228118896484375,
-0.01412200927734375,
0.007724761962890625,
0.003917694091796875,
-0.01849365234375,
0.05474853515625,
0.0312042236328125,
-0.0340576171875,
-0.056060791015625,
-0.05224609375,
-... |
piyush23111991/amazonProductData | 2023-10-13T04:50:11.000Z | [
"region:us"
] | piyush23111991 | null | null | 0 | 17 | 2023-10-02T20:25:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ashiyakatuka11/en_es_combo_dataset | 2023-10-03T12:19:15.000Z | [
"region:us"
] | ashiyakatuka11 | null | null | 0 | 17 | 2023-10-03T12:19:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Session_ID
dtype: float64
- name: 'Speaker '
dtype: string
- name: UserID
dtype: string
- name: prev_Utterance
dtype: string
- name: Utterance
dtype: string
- name: prevUtt_TAG
dtype: string
- name: TAG
dtype: string
- name: new_TAG
dtype: string
- name: new_TAG_name
dtype: string
- name: labels
dtype: int64
- name: 'Corpus Utterance #'
dtype: float64
- name: 'Session Utterance #'
dtype: string
- name: Time
dtype: string
- name: User
dtype: string
- name: Session ID
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1348026
num_examples: 7684
- name: test
num_bytes: 337648
num_examples: 1922
download_size: 595953
dataset_size: 1685674
---
# Dataset Card for "en_es_combo_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,139 | [
[
-0.0478515625,
-0.00821685791015625,
0.0024623870849609375,
0.01177978515625,
-0.02520751953125,
0.0211029052734375,
0.01422119140625,
-0.007732391357421875,
0.08245849609375,
0.047088623046875,
-0.05657958984375,
-0.042724609375,
-0.0361328125,
-0.006984710... |
Hack90/ncbi_genbank_part_0 | 2023-10-04T19:45:14.000Z | [
"region:us"
] | Hack90 | null | null | 0 | 17 | 2023-10-04T18:59:55 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: features
dtype: int64
- name: seq_length
dtype: int64
splits:
- name: train
num_bytes: 257341428
num_examples: 156
download_size: 118952731
dataset_size: 257341428
---
# Dataset Card for "ncbi_genbank_part_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 634 | [
[
-0.044097900390625,
-0.0227813720703125,
0.0204315185546875,
0.01021575927734375,
-0.0254974365234375,
0.0182647705078125,
0.039459228515625,
-0.004192352294921875,
0.06707763671875,
0.03729248046875,
-0.05224609375,
-0.06591796875,
-0.026824951171875,
-0.00... |
jayashri710/llama2-cricketdata | 2023-10-06T09:50:46.000Z | [
"region:us"
] | jayashri710 | null | null | 0 | 17 | 2023-10-05T13:30:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
PericlesSavio/contratacao4 | 2023-10-06T14:42:45.000Z | [
"region:us"
] | PericlesSavio | null | null | 0 | 17 | 2023-10-06T14:41:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
towhid/aesir-test | 2023-10-06T20:29:56.000Z | [
"region:us"
] | towhid | null | null | 0 | 17 | 2023-10-06T20:29:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JordanTallon/political_bias | 2023-10-07T21:43:25.000Z | [
"region:us"
] | JordanTallon | null | null | 0 | 17 | 2023-10-07T21:42:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Mizukiluke/ureader-instruction-1.0 | 2023-10-13T19:17:19.000Z | [
"region:us"
] | Mizukiluke | null | null | 0 | 17 | 2023-10-09T02:07:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
joheras/spanish-suicide-intent | 2023-10-10T14:20:03.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:es",
"license:cc-by-4.0",
"region:us"
] | joheras | null | null | 0 | 17 | 2023-10-10T12:34:26 | ---
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Text
dtype: string
- name: Label
dtype: int64
- name: dataset
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 31442785
num_examples: 136136
- name: val
num_bytes: 3542897
num_examples: 15131
- name: test
num_bytes: 8671755
num_examples: 37820
download_size: 17952583
dataset_size: 43657437
task_categories:
- text-classification
language:
- es
size_categories:
- 100K<n<1M
---
# Dataset Summary
The dataset consists of comments from several sources, translated into Spanish and classified as suicidal ideation/behavior or non-suicidal.
# Dataset Structure
The dataset has 175010 rows (77223 labelled as suicidal ideation/behavior and 97787 as not suicidal).
## Dataset fields
* `Text`: User comment.
* `Label`: 1 if suicidal ideation/behavior; 0 if not suicidal comment.
* `dataset`: Source of the comment.
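Rows follow the schema above; a minimal pure-Python sketch of tallying loaded rows by label and by source (the example rows here are invented for illustration, not taken from the data):

```python
from collections import Counter

# Invented rows following the card's schema: Text, Label, dataset (source).
rows = [
    {"Text": "comentario uno", "Label": 1, "dataset": "Suicide Watch"},
    {"Text": "comentario dos", "Label": 0, "dataset": "TwitterSuicidalAnalysis"},
    {"Text": "comentario tres", "Label": 0, "dataset": "Suicide Watch"},
]

# Label == 1 marks suicidal ideation/behavior, Label == 0 a non-suicidal comment.
label_counts = Counter(r["Label"] for r in rows)
by_source = Counter(r["dataset"] for r in rows)
print(label_counts)  # Counter({0: 2, 1: 1})
print(by_source)     # Counter({'Suicide Watch': 2, 'TwitterSuicidalAnalysis': 1})
```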
# Dataset Creation
* 112385 (84485 non suicidal, 27905 suicidal) from the [Suicide Watch dataset](https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch/).
* 46894 (46894 suicidal) from the [TwitterSuicidalAnalysis](https://github.com/IE-NITK/TwitterSuicidalAnalysis).
* 9919 (9183 non suicidal, 736 suicidal) from the corpus generated in [Hackathon Somos NLP](https://huggingface.co/datasets/hackathon-somos-nlp-2023/suicide-comments-es)
* 8744 (4802 non suicidal, 3942 suicidal) from the paper [An Attention-based hybrid architecture with explainability for depressive social media text detection in Bangla](https://github.com/NM001007/An-Attention-based-Hybrid-Suicide-Ideation-Detection)
* 7084 (3559 non suicidal, 3525 suicidal) from the paper [Supervised Learning for Suicidal Ideation Detection in Online User Content](https://github.com/TabbieD/NLP-Sentiment-Analysis)
* 1972 (1540 non suicidal, 432 suicidal) from the paper [Detection of Suicidal Intent in Spanish Language Social Networks using Machine Learning](https://github.com/kvvaldez/spanish_suicide/blob/master/dataset/suicidio_notacion.csv)
* 1769 (1122 non suicidal, 647 suicidal) from the corpus [Suicidal Tweet Detection](https://www.kaggle.com/datasets/aunanya875/suicidal-tweet-detection-dataset/data)
* 316 (204 non suicidal, 112 suicidal) from the paper [Data Mining Approach to the Detection of Suicide in Social Media: A Case Study of Singapore](https://github.com/shingkid/data-mining-suicide-sg/tree/master)
# Considerations for Using the Data
## Social Impact of Dataset
The dataset may contain patterns that are useful for detecting suicidal ideation/behavior.
## Discussion of Biases
No measures have been taken to estimate the bias and toxicity embedded in the dataset. However, most of the data was collected from Reddit, Twitter, and ChatGPT, so there is probably an age bias because [the Internet is used more by younger people](https://www.statista.com/statistics/272365/age-distribution-of-internet-users-worldwide).
# Additional Information
## Team
* [joheras](https://huggingface.co/joheras)
| 3,242 | [
[
-0.01971435546875,
-0.06256103515625,
0.046112060546875,
0.05926513671875,
-0.00807952880859375,
-0.0045166015625,
-0.0150909423828125,
-0.025146484375,
0.0283966064453125,
0.009613037109375,
-0.054962158203125,
-0.0657958984375,
-0.039764404296875,
0.028289... |
W1lson/RMData | 2023-10-11T05:39:01.000Z | [
"region:us"
] | W1lson | null | null | 0 | 17 | 2023-10-11T05:38:59 | ---
dataset_info:
features:
- name: Source ID
dtype: int64
- name: Primary Text
dtype: string
- name: Artifact Type
dtype: string
- name: Design Package
dtype: string
- name: Location
dtype: string
- name: Verification Method
dtype: string
- name: Validation Method
dtype: string
splits:
- name: train
num_bytes: 6326
num_examples: 35
download_size: 7719
dataset_size: 6326
---
# Dataset Card for "RMData"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 598 | [
[
-0.04443359375,
-0.0183563232421875,
0.01910400390625,
0.01143646240234375,
-0.01393890380859375,
-0.001697540283203125,
0.0236358642578125,
-0.01068115234375,
0.059814453125,
0.03094482421875,
-0.06756591796875,
-0.06109619140625,
-0.041748046875,
-0.011619... |
datastax/entomology | 2023-10-11T08:55:50.000Z | [
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"region:us"
] | datastax | null | null | 0 | 17 | 2023-10-11T08:03:34 | ---
license: apache-2.0
language:
- en
pretty_name: Fictional entomology
size_categories:
- n<1K
---
32 made-up insect descriptions with Latin names and orders (well, there's a spider, too), as one would find in a field guide.
These were created with ChatGPT 3.5 / ChatGPT 4 for the purpose of running example applications such as an "entomology field guide helper".
Entirely fictional material was chosen to avoid the demos inadvertently drawing on the LLM's implicit knowledge from pretraining.
[
-0.038604736328125,
-0.023193359375,
0.041717529296875,
0.027862548828125,
-0.0284271240234375,
0.0025424957275390625,
0.0173187255859375,
-0.06573486328125,
0.038330078125,
0.026947021484375,
-0.04931640625,
-0.02435302734375,
-0.0203399658203125,
0.0508422... |
shrutisingh/dataset_recommendation_mcq_mc | 2023-10-12T17:15:59.000Z | [
"license:apache-2.0",
"region:us"
] | shrutisingh | null | null | 0 | 17 | 2023-10-12T17:02:16 | ---
license: apache-2.0
---
Task: MCQ with multiple correct answers.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the [DataFinder](https://aclanthology.org/2023.acl-long.573/) dataset. We curate the abstracts of each dataset from [PapersWithCode](https://paperswithcode.com/datasets).
Each instance provides a short `query` discussing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, to train a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with multiple correct answers. We also add the abstracts from the research papers introducing the datasets so that context can be provided to the models.
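Since each question has multiple correct answers, a natural metric is an exact set match over the selected options; a minimal sketch (pure Python; the option names are invented and this metric is an assumption, not one prescribed by the dataset):

```python
# Exact-match scoring for MCQs with multiple correct answers: a prediction
# counts only if it selects exactly the gold set of options.
def exact_set_match(gold_options, predicted_options):
    return set(gold_options) == set(predicted_options)

gold = [["SQuAD", "Natural Questions"], ["ImageNet"]]
pred = [["Natural Questions", "SQuAD"], ["COCO"]]

accuracy = sum(exact_set_match(g, p) for g, p in zip(gold, pred)) / len(gold)
print(accuracy)  # 0.5
```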
To reproduce the construction of this dataset, please visit [https://github.com/shruti-singh/scidata_recommendation](https://github.com/shruti-singh/scidata_recommendation).
Please note that the query instances in this dataset have no intersection with the [`dataset_recommendation_mcq_sc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_sc) dataset. [`dataset_recommendation_mcq_sc`](https://huggingface.co/datasets/shrutisingh/dataset_recommendation_mcq_sc) is a variant of this MCQ question-answering task with only single correct answer. | 1,378 | [
[
-0.036468505859375,
-0.038787841796875,
0.037445068359375,
0.002735137939453125,
-0.0130157470703125,
-0.00720977783203125,
0.004314422607421875,
-0.0008263587951660156,
0.016876220703125,
0.050750732421875,
-0.05670166015625,
-0.035400390625,
-0.018646240234375... |
Eitanli/abstracts_cleaned | 2023-10-14T11:37:43.000Z | [
"region:us"
] | Eitanli | null | null | 0 | 17 | 2023-10-13T11:43:34 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: recall
dtype: int64
- name: article_title
dtype: string
- name: topic
dtype: string
- name: abstract
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 137515873.22056717
num_examples: 79863
- name: test
num_bytes: 17189699.389716417
num_examples: 9983
- name: valid
num_bytes: 17189699.389716417
num_examples: 9983
download_size: 92795013
dataset_size: 171895272.0
---
# Dataset Card for "abstracts_cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 844 | [
[
-0.0325927734375,
-0.0192108154296875,
0.0300140380859375,
-0.008056640625,
-0.028778076171875,
-0.00231170654296875,
0.01163482666015625,
-0.0218963623046875,
0.07281494140625,
0.04034423828125,
-0.044891357421875,
-0.058929443359375,
-0.0401611328125,
0.00... |
Konthee/pokemon | 2023-10-14T04:42:21.000Z | [
"region:us"
] | Konthee | null | null | 0 | 17 | 2023-10-13T17:06:50 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: th-input_ids
sequence: int64
- name: th-attention_mask
sequence: int64
splits:
- name: train
num_bytes: 496836
num_examples: 666
- name: val
num_bytes: 124582
num_examples: 167
download_size: 32687
dataset_size: 621418
---
# Dataset Card for "pokemon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 666 | [
[
-0.041015625,
-0.0103302001953125,
0.02020263671875,
0.0194091796875,
-0.0105743408203125,
-0.0017490386962890625,
0.0192718505859375,
-0.014007568359375,
0.078857421875,
0.0254669189453125,
-0.06103515625,
-0.041900634765625,
-0.03961181640625,
-0.010627746... |
khalidalt/Ashaar_diac_1 | 2023-10-14T13:55:59.000Z | [
"region:us"
] | khalidalt | null | null | 0 | 17 | 2023-10-14T13:48:44 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 12159497
num_examples: 23481
download_size: 6059483
dataset_size: 12159497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Ashaar_diac"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 518 | [
[
-0.0478515625,
-0.01366424560546875,
0.003444671630859375,
0.01580810546875,
-0.017608642578125,
0.0003509521484375,
0.033233642578125,
-0.01605224609375,
0.06597900390625,
0.028778076171875,
-0.04290771484375,
-0.06622314453125,
-0.04486083984375,
0.0009083... |
phanvancongthanh/pubchem_bioassay | 2023-10-17T06:51:24.000Z | [
"region:us"
] | phanvancongthanh | null | null | 0 | 17 | 2023-10-16T04:41:57 | ---
dataset_info:
features:
- name: PUBCHEM_CID
dtype: float64
- name: PUBCHEM_EXT_DATASOURCE_SMILES
dtype: string
splits:
- name: train
num_bytes: 13266669373.336466
num_examples: 210186056
download_size: 6660630004
dataset_size: 13266669373.336466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "pubchem_bioassay"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 540 | [
[
-0.0164642333984375,
-0.0167694091796875,
0.03875732421875,
0.01515960693359375,
-0.01378631591796875,
0.0133056640625,
0.02984619140625,
-0.004375457763671875,
0.06817626953125,
0.03564453125,
-0.047607421875,
-0.06787109375,
-0.031951904296875,
0.003849029... |
HumanCompatibleAI/random-seals-HalfCheetah-v1 | 2023-10-17T05:38:15.000Z | [
"region:us"
] | HumanCompatibleAI | null | null | 0 | 17 | 2023-10-17T05:37:48 | ---
dataset_info:
features:
- name: obs
sequence:
sequence: float64
- name: acts
sequence:
sequence: float32
- name: infos
sequence: string
- name: terminal
dtype: bool
- name: rews
sequence: float32
splits:
- name: train
num_bytes: 109003139
num_examples: 100
download_size: 46825772
dataset_size: 109003139
---
# Dataset Card for "random-seals-HalfCheetah-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 554 | [
[
-0.033599853515625,
-0.0191802978515625,
0.01251220703125,
0.02197265625,
-0.03448486328125,
-0.0024814605712890625,
0.03790283203125,
-0.022918701171875,
0.07720947265625,
0.043548583984375,
-0.07000732421875,
-0.046600341796875,
-0.047454833984375,
-0.0149... |
Back-up/flan-5k-sample | 2023-10-17T12:07:55.000Z | [
"region:us"
] | Back-up | null | null | 0 | 17 | 2023-10-17T12:07:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: task
dtype: string
splits:
- name: train
num_bytes: 3596003.2
num_examples: 4000
- name: test
num_bytes: 899000.8
num_examples: 1000
download_size: 2413137
dataset_size: 4495004.0
---
# Dataset Card for "flan-5k-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 617 | [
[
-0.04864501953125,
-0.00839996337890625,
0.0019235610961914062,
0.005889892578125,
-0.00943756103515625,
-0.007251739501953125,
0.0157928466796875,
-0.0264434814453125,
0.061492919921875,
0.036651611328125,
-0.0584716796875,
-0.0494384765625,
-0.025726318359375,... |
schhetri41/SSDataset | 2023-10-18T07:16:01.000Z | [
"region:us"
] | schhetri41 | null | null | 0 | 17 | 2023-10-18T07:02:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
goodcoffee/covidQA_eval | 2023-10-19T11:56:42.000Z | [
"region:us"
] | goodcoffee | null | null | 0 | 17 | 2023-10-18T21:42:53 | ---
dataset_info:
features:
- name: input_ids
sequence: int64
- name: attention_mask
sequence: int64
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 414807
num_examples: 50
download_size: 50631
dataset_size: 414807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "covidQA_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.03558349609375,
-0.03228759765625,
0.002117156982421875,
0.016387939453125,
-0.006366729736328125,
0.01174163818359375,
0.025054931640625,
-0.0018777847290039062,
0.049163818359375,
0.0181884765625,
-0.052459716796875,
-0.054351806640625,
-0.030364990234375,
... |
cmu-mlsp/encodec_24khz-opt-125m-pretrained-ft-librispeech_asr-validation.clean-features | 2023-10-24T12:45:07.000Z | [
"region:us"
] | cmu-mlsp | null | null | 0 | 17 | 2023-10-20T16:25:40 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: validation.clean
num_bytes: 955281891.125
num_examples: 2703
download_size: 914893005
dataset_size: 955281891.125
configs:
- config_name: default
data_files:
- split: validation.clean
path: data/validation.clean-*
---
# Dataset Card for "encodec_24khz-opt-125m-pretrained-ft-librispeech_asr-validation.clean-features"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 820 | [
[
-0.057037353515625,
-0.00872802734375,
-0.0038661956787109375,
0.011871337890625,
-0.0273895263671875,
0.01198577880859375,
-0.0099945068359375,
-0.0166015625,
0.035430908203125,
0.038116455078125,
-0.0667724609375,
-0.044830322265625,
-0.0306549072265625,
-... |
Claudiano/donut-invoices | 2023-10-21T00:07:13.000Z | [
"region:us"
] | Claudiano | null | null | 1 | 17 | 2023-10-21T00:07:12 | ---
dataset_info:
features:
- name: ground_truth
dtype: string
- name: image
dtype: image
splits:
- name: test2
num_bytes: 99821.0
num_examples: 1
download_size: 103707
dataset_size: 99821.0
configs:
- config_name: default
data_files:
- split: test2
path: data/test2-*
---
# Dataset Card for "donut-invoices"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.0104217529296875,
-0.00476837158203125,
0.0153656005859375,
0.00270843505859375,
-0.002044677734375,
0.01287841796875,
0.0160369873046875,
-0.0052337646484375,
0.0556640625,
0.052398681640625,
-0.0445556640625,
-0.046875,
-0.035888671875,
-0.0293426513671... |
qgyd2021/nxcloud_customer_service | 2023-10-24T03:11:08.000Z | [
"task_categories:text-generation",
"task_categories:feature-extraction",
"task_categories:conversational",
"size_categories:100M<n<1B",
"language:zh",
"region:us"
] | qgyd2021 | null | @dataset{nxcloud_customer_service,
author = {Xing Tian},
title = {nxcloud_customer_service},
month = sep,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | 0 | 17 | 2023-10-23T06:44:51 | ---
task_categories:
- text-generation
- feature-extraction
- conversational
language:
- zh
size_categories:
- 100M<n<1B
---
## NXCloud Customer Service
| 154 | [
[
-0.03900146484375,
-0.01091766357421875,
0.034637451171875,
0.0650634765625,
-0.0218353271484375,
0.033233642578125,
0.022125244140625,
-0.00559234619140625,
0.034515380859375,
0.0968017578125,
-0.0782470703125,
-0.0198974609375,
-0.013671875,
0.020843505859... |
roupenminassian/vehicle-dataset | 2023-10-23T09:40:06.000Z | [
"region:us"
] | roupenminassian | null | null | 0 | 17 | 2023-10-23T09:39:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: int64
- name: width
dtype: int64
- name: height
dtype: int64
- name: objects
struct:
- name: id
sequence: int64
- name: area
sequence: float64
- name: bbox
sequence:
sequence: float64
- name: category
sequence: int64
splits:
- name: train
num_bytes: 74749784.0
num_examples: 618
download_size: 74708626
dataset_size: 74749784.0
---
# Dataset Card for "vehicle-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 762 | [
[
-0.04937744140625,
-0.007080078125,
0.0233917236328125,
0.0165863037109375,
-0.0143585205078125,
0.00820159912109375,
0.022705078125,
-0.0114898681640625,
0.041656494140625,
0.0205078125,
-0.06640625,
-0.0418701171875,
-0.0276031494140625,
-0.036712646484375... |
Mihir1108/json_data | 2023-10-23T13:02:52.000Z | [
"region:us"
] | Mihir1108 | null | null | 0 | 17 | 2023-10-23T13:02:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
kardosdrur/hestenet-qa | 2023-10-23T14:16:16.000Z | [
"license:mit",
"region:us"
] | kardosdrur | null | null | 1 | 17 | 2023-10-23T13:37:15 | ---
license: mit
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1144206.5903728174
num_examples: 1695
- name: test
num_bytes: 286220.40962718264
num_examples: 424
download_size: 936129
dataset_size: 1430427.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Hestenet Question-Answer
The dataset is based on data from Hestenettet in the Danish Gigaword corpus.
Question-answer pairs were extracted purely on the basis of heuristics and have not been manually evaluated.
The dataset was created to aid the training of sentence-transformer models in the Danish Foundation Models project.
The dataset is currently not production-ready.
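The extraction heuristics themselves are not documented in this card; as a purely illustrative toy (not the actual rule used), one such heuristic might pair a post ending in a question mark with the reply that follows it:

```python
# Toy heuristic, for illustration only: treat a post ending in "?" as a
# question and the next post in the thread as its answer.
def extract_qa_pairs(thread):
    pairs = []
    for post, reply in zip(thread, thread[1:]):
        if post.strip().endswith("?"):
            pairs.append({"question": post, "answer": reply})
    return pairs

thread = [
    "Hvordan fodrer jeg bedst min hest?",
    "Giv den hø to gange dagligt.",
    "Tak for svaret!",
]
print(extract_qa_pairs(thread))
```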
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 960 | [
[
-0.04058837890625,
-0.05169677734375,
0.0242767333984375,
0.00762939453125,
0.002361297607421875,
-0.0006561279296875,
-0.02313232421875,
-0.0302276611328125,
0.02947998046875,
0.04791259765625,
-0.0601806640625,
-0.024383544921875,
-0.04193115234375,
0.0098... |
optech/fbz_chat | 2023-10-24T04:37:22.000Z | [
"region:us"
] | optech | null | null | 0 | 17 | 2023-10-24T04:36:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
gabrielmbmb/my-dataset | 2023-10-24T09:27:36.000Z | [
"region:us"
] | gabrielmbmb | null | null | 0 | 17 | 2023-10-24T09:27:34 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: generations
sequence: string
- name: score
sequence: int64
- name: rationale
sequence: string
splits:
- name: train
num_bytes: 176800
num_examples: 50
download_size: 94403
dataset_size: 176800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "my-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 559 | [
[
-0.05706787109375,
-0.01666259765625,
0.015167236328125,
0.01261138916015625,
-0.001407623291015625,
0.003391265869140625,
0.0207977294921875,
-0.00951385498046875,
0.07794189453125,
0.03778076171875,
-0.06475830078125,
-0.04388427734375,
-0.037078857421875,
... |
gayathrimanoj/dataset-llama-unix-extended | 2023-10-24T14:43:50.000Z | [
"region:us"
] | gayathrimanoj | null | null | 0 | 17 | 2023-10-24T14:43:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Geonmo/gcc12m_caption_only | 2023-10-25T08:40:33.000Z | [
"region:us"
] | Geonmo | null | null | 0 | 17 | 2023-10-25T08:32:24 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1329443791
num_examples: 12423374
download_size: 943024335
dataset_size: 1329443791
---
# Dataset Card for "gcc12m_caption_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 373 | [
[
-0.0345458984375,
-0.018951416015625,
0.02313232421875,
0.0222015380859375,
-0.0372314453125,
0.01192474365234375,
0.002719879150390625,
-0.00952911376953125,
0.057891845703125,
0.049652099609375,
-0.06573486328125,
-0.061981201171875,
-0.050384521484375,
-0... |
HoangHa/Vie_alpaca | 2023-10-26T09:44:26.000Z | [
"region:us"
] | HoangHa | null | null | 0 | 17 | 2023-10-26T09:44:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 51907952
num_examples: 49999
download_size: 24606528
dataset_size: 51907952
---
# Dataset Card for "Vie_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 442 | [
[
-0.048248291015625,
-0.03411865234375,
0.002208709716796875,
0.01922607421875,
-0.0214080810546875,
-0.012359619140625,
0.04229736328125,
-0.0146026611328125,
0.0828857421875,
0.052276611328125,
-0.0467529296875,
-0.053924560546875,
-0.044921875,
-0.03216552... |
emi429/humansleepproject-small-individuals | 2023-10-26T18:18:10.000Z | [
"region:us"
] | emi429 | null | null | 0 | 17 | 2023-10-26T14:31:15 | ---
dataset_info:
features:
- name: rr_intervals
dtype: int64
- name: sleep_stage
dtype: int64
- name: patient_id
dtype: int64
splits:
- name: test
num_bytes: 12096
num_examples: 504
- name: train
num_bytes: 49680
num_examples: 2070
download_size: 47116
dataset_size: 61776
---
# Dataset Card for "humansleepproject-small-individuals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.034149169921875,
-0.005870819091796875,
0.0173187255859375,
0.0200653076171875,
-0.0056304931640625,
0.004215240478515625,
0.00959014892578125,
-0.0217437744140625,
0.0704345703125,
0.0266571044921875,
-0.057464599609375,
-0.03948974609375,
-0.026229858398437... |
Kateway/Thursday | 2023-10-26T18:42:34.000Z | [
"region:us"
] | Kateway | null | null | 0 | 17 | 2023-10-26T18:36:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DataScienceClubUVU/ServiceProjectFall2023 | 2023-10-29T02:27:16.000Z | [
"region:us"
] | DataScienceClubUVU | null | null | 0 | 17 | 2023-10-26T20:16:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': d0
'1': d1
'2': d10
'3': d100
'4': d101
'5': d102
'6': d103
'7': d104
'8': d105
'9': d106
'10': d107
'11': d108
'12': d109
'13': d11
'14': d110
'15': d111
'16': d112
'17': d113
'18': d114
'19': d115
'20': d116
'21': d117
'22': d118
'23': d119
'24': d12
'25': d120
'26': d121
'27': d122
'28': d123
'29': d124
'30': d125
'31': d126
'32': d127
'33': d128
'34': d129
'35': d13
'36': d130
'37': d131
'38': d132
'39': d133
'40': d134
'41': d135
'42': d136
'43': d137
'44': d138
'45': d139
'46': d14
'47': d140
'48': d141
'49': d142
'50': d143
'51': d144
'52': d145
'53': d146
'54': d147
'55': d148
'56': d149
'57': d15
'58': d150
'59': d151
'60': d152
'61': d153
'62': d154
'63': d155
'64': d156
'65': d157
'66': d158
'67': d159
'68': d16
'69': d160
'70': d161
'71': d162
'72': d163
'73': d164
'74': d165
'75': d166
'76': d167
'77': d168
'78': d169
'79': d17
'80': d170
'81': d171
'82': d172
'83': d173
'84': d174
'85': d175
'86': d176
'87': d177
'88': d178
'89': d179
'90': d18
'91': d180
'92': d181
'93': d182
'94': d183
'95': d184
'96': d185
'97': d186
'98': d187
'99': d188
'100': d189
'101': d19
'102': d190
'103': d191
'104': d192
'105': d193
'106': d194
'107': d195
'108': d196
'109': d197
'110': d198
'111': d199
'112': d2
'113': d20
'114': d200
'115': d201
'116': d202
'117': d203
'118': d204
'119': d205
'120': d206
'121': d207
'122': d208
'123': d209
'124': d21
'125': d210
'126': d211
'127': d212
'128': d213
'129': d214
'130': d215
'131': d216
'132': d217
'133': d218
'134': d219
'135': d22
'136': d220
'137': d221
'138': d222
'139': d223
'140': d224
'141': d225
'142': d226
'143': d227
'144': d228
'145': d229
'146': d23
'147': d230
'148': d231
'149': d232
'150': d233
'151': d234
'152': d235
'153': d236
'154': d237
'155': d238
'156': d239
'157': d24
'158': d240
'159': d241
'160': d242
'161': d243
'162': d244
'163': d245
'164': d246
'165': d247
'166': d248
'167': d249
'168': d25
'169': d250
'170': d251
'171': d252
'172': d253
'173': d254
'174': d255
'175': d256
'176': d257
'177': d258
'178': d259
'179': d26
'180': d260
'181': d261
'182': d262
'183': d263
'184': d264
'185': d265
'186': d266
'187': d267
'188': d268
'189': d269
'190': d27
'191': d270
'192': d271
'193': d272
'194': d273
'195': d274
'196': d275
'197': d276
'198': d277
'199': d278
'200': d279
'201': d28
'202': d280
'203': d281
'204': d282
'205': d283
'206': d284
'207': d285
'208': d286
'209': d287
'210': d288
'211': d289
'212': d29
'213': d290
'214': d291
'215': d292
'216': d293
'217': d294
'218': d295
'219': d296
'220': d297
'221': d298
'222': d299
'223': d3
'224': d30
'225': d300
'226': d301
'227': d302
'228': d303
'229': d304
'230': d305
'231': d306
'232': d307
'233': d308
'234': d309
'235': d31
'236': d310
'237': d311
'238': d312
'239': d313
'240': d314
'241': d315
'242': d316
'243': d317
'244': d318
'245': d319
'246': d32
'247': d320
'248': d321
'249': d322
'250': d323
'251': d324
'252': d325
'253': d326
'254': d327
'255': d328
'256': d329
'257': d33
'258': d330
'259': d331
'260': d332
'261': d333
'262': d334
'263': d335
'264': d336
'265': d337
'266': d338
'267': d339
'268': d34
'269': d340
'270': d341
'271': d342
'272': d343
'273': d344
'274': d345
'275': d346
'276': d347
'277': d348
'278': d349
'279': d35
'280': d350
'281': d351
'282': d352
'283': d353
'284': d354
'285': d355
'286': d356
'287': d357
'288': d358
'289': d359
'290': d36
'291': d360
'292': d361
'293': d362
'294': d363
'295': d364
'296': d365
'297': d366
'298': d367
'299': d368
'300': d369
'301': d37
'302': d370
'303': d371
'304': d372
'305': d373
'306': d374
'307': d375
'308': d376
'309': d377
'310': d378
'311': d379
'312': d38
'313': d380
'314': d381
'315': d382
'316': d383
'317': d384
'318': d385
'319': d386
'320': d387
'321': d388
'322': d389
'323': d39
'324': d390
'325': d391
'326': d392
'327': d393
'328': d394
'329': d395
'330': d396
'331': d397
'332': d398
'333': d399
'334': d4
'335': d40
'336': d400
'337': d401
'338': d402
'339': d403
'340': d404
'341': d405
'342': d406
'343': d407
'344': d408
'345': d409
'346': d41
'347': d410
'348': d411
'349': d412
'350': d413
'351': d414
'352': d415
'353': d416
'354': d417
'355': d418
'356': d419
'357': d42
'358': d420
'359': d421
'360': d422
'361': d423
'362': d424
'363': d425
'364': d426
'365': d427
'366': d428
'367': d429
'368': d43
'369': d430
'370': d431
'371': d432
'372': d433
'373': d434
'374': d435
'375': d436
'376': d437
'377': d438
'378': d439
'379': d44
'380': d440
'381': d441
'382': d442
'383': d443
'384': d444
'385': d445
'386': d446
'387': d447
'388': d448
'389': d449
'390': d45
'391': d450
'392': d451
'393': d452
'394': d453
'395': d454
'396': d455
'397': d456
'398': d457
'399': d458
'400': d459
'401': d46
'402': d460
'403': d461
'404': d462
'405': d463
'406': d464
'407': d465
'408': d466
'409': d467
'410': d468
'411': d469
'412': d47
'413': d470
'414': d471
'415': d472
'416': d473
'417': d474
'418': d475
'419': d476
'420': d477
'421': d478
'422': d479
'423': d48
'424': d480
'425': d481
'426': d482
'427': d483
'428': d484
'429': d485
'430': d486
'431': d487
'432': d488
'433': d489
'434': d49
'435': d490
'436': d491
'437': d492
'438': d493
'439': d494
'440': d495
'441': d496
'442': d497
'443': d498
'444': d499
'445': d5
'446': d50
'447': d500
'448': d501
'449': d502
'450': d503
'451': d504
'452': d505
'453': d506
'454': d507
'455': d508
'456': d509
'457': d51
'458': d510
'459': d511
'460': d512
'461': d513
'462': d514
'463': d515
'464': d516
'465': d517
'466': d518
'467': d519
'468': d52
'469': d520
'470': d521
'471': d522
'472': d523
'473': d524
'474': d525
'475': d526
'476': d527
'477': d528
'478': d529
'479': d53
'480': d530
'481': d531
'482': d532
'483': d533
'484': d534
'485': d535
'486': d536
'487': d537
'488': d538
'489': d539
'490': d54
'491': d540
'492': d541
'493': d542
'494': d543
'495': d544
'496': d545
'497': d546
'498': d547
'499': d548
'500': d549
'501': d55
'502': d550
'503': d551
'504': d552
'505': d553
'506': d554
'507': d555
'508': d556
'509': d557
'510': d558
'511': d559
'512': d56
'513': d560
'514': d561
'515': d562
'516': d563
'517': d564
'518': d565
'519': d566
'520': d567
'521': d568
'522': d569
'523': d57
'524': d570
'525': d571
'526': d572
'527': d573
'528': d574
'529': d575
'530': d576
'531': d577
'532': d578
'533': d579
'534': d58
'535': d580
'536': d581
'537': d582
'538': d583
'539': d584
'540': d585
'541': d586
'542': d587
'543': d588
'544': d589
'545': d59
'546': d590
'547': d591
'548': d592
'549': d593
'550': d594
'551': d595
'552': d596
'553': d597
'554': d598
'555': d599
'556': d6
'557': d60
'558': d600
'559': d601
'560': d602
'561': d603
'562': d604
'563': d605
'564': d606
'565': d607
'566': d608
'567': d609
'568': d61
'569': d610
'570': d611
'571': d612
'572': d613
'573': d614
'574': d615
'575': d616
'576': d617
'577': d618
'578': d619
'579': d62
'580': d620
'581': d621
'582': d622
'583': d623
'584': d624
'585': d625
'586': d626
'587': d627
'588': d628
'589': d629
'590': d63
'591': d630
'592': d631
'593': d632
'594': d633
'595': d634
'596': d635
'597': d636
'598': d637
'599': d638
'600': d639
'601': d64
'602': d640
'603': d641
'604': d642
'605': d643
'606': d644
'607': d645
'608': d646
'609': d647
'610': d648
'611': d649
'612': d65
'613': d650
'614': d651
'615': d652
'616': d653
'617': d654
'618': d655
'619': d656
'620': d657
'621': d658
'622': d659
'623': d66
'624': d660
'625': d661
'626': d662
'627': d663
'628': d664
'629': d665
'630': d666
'631': d667
'632': d668
'633': d669
'634': d67
'635': d670
'636': d671
'637': d672
'638': d673
'639': d674
'640': d675
'641': d676
'642': d677
'643': d678
'644': d679
'645': d68
'646': d680
'647': d681
'648': d682
'649': d683
'650': d684
'651': d685
'652': d686
'653': d687
'654': d688
'655': d689
'656': d69
'657': d690
'658': d691
'659': d692
'660': d693
'661': d694
'662': d695
'663': d696
'664': d697
'665': d698
'666': d699
'667': d7
'668': d70
'669': d700
'670': d701
'671': d702
'672': d703
'673': d704
'674': d705
'675': d706
'676': d707
'677': d708
'678': d709
'679': d71
'680': d710
'681': d711
'682': d712
'683': d713
'684': d714
'685': d715
'686': d716
'687': d717
'688': d718
'689': d719
'690': d72
'691': d720
'692': d721
'693': d722
'694': d723
'695': d724
'696': d725
'697': d726
'698': d727
'699': d728
'700': d729
'701': d73
'702': d730
'703': d731
'704': d732
'705': d733
'706': d734
'707': d735
'708': d736
'709': d737
'710': d738
'711': d739
'712': d74
'713': d740
'714': d741
'715': d742
'716': d743
'717': d744
'718': d745
'719': d746
'720': d747
'721': d748
'722': d749
'723': d75
'724': d750
'725': d751
'726': d752
'727': d753
'728': d754
'729': d755
'730': d756
'731': d757
'732': d758
'733': d759
'734': d76
'735': d760
'736': d761
'737': d762
'738': d763
'739': d764
'740': d765
'741': d766
'742': d767
'743': d768
'744': d769
'745': d77
'746': d770
'747': d771
'748': d772
'749': d773
'750': d774
'751': d775
'752': d776
'753': d777
'754': d778
'755': d779
'756': d78
'757': d780
'758': d781
'759': d782
'760': d783
'761': d784
'762': d785
'763': d786
'764': d787
'765': d788
'766': d789
'767': d79
'768': d790
'769': d791
'770': d792
'771': d793
'772': d794
'773': d795
'774': d796
'775': d797
'776': d798
'777': d799
'778': d8
'779': d80
'780': d800
'781': d801
'782': d802
'783': d803
'784': d804
'785': d805
'786': d806
'787': d807
'788': d808
'789': d809
'790': d81
'791': d810
'792': d811
'793': d812
'794': d813
'795': d814
'796': d815
'797': d816
'798': d817
'799': d818
'800': d819
'801': d82
'802': d820
'803': d821
'804': d822
'805': d823
'806': d824
'807': d825
'808': d826
'809': d827
'810': d828
'811': d829
'812': d83
'813': d830
'814': d831
'815': d832
'816': d833
'817': d834
'818': d835
'819': d836
'820': d837
'821': d838
'822': d839
'823': d84
'824': d840
'825': d841
'826': d842
'827': d843
'828': d844
'829': d845
'830': d846
'831': d847
'832': d848
'833': d849
'834': d85
'835': d850
'836': d851
'837': d852
'838': d853
'839': d854
'840': d855
'841': d856
'842': d857
'843': d858
'844': d859
'845': d86
'846': d860
'847': d861
'848': d862
'849': d863
'850': d864
'851': d865
'852': d866
'853': d867
'854': d868
'855': d869
'856': d87
'857': d870
'858': d871
'859': d872
'860': d873
'861': d874
'862': d875
'863': d876
'864': d877
'865': d878
'866': d879
'867': d88
'868': d880
'869': d881
'870': d882
'871': d883
'872': d884
'873': d885
'874': d886
'875': d887
'876': d888
'877': d889
'878': d89
'879': d890
'880': d891
'881': d892
'882': d893
'883': d894
'884': d895
'885': d896
'886': d897
'887': d898
'888': d899
'889': d9
'890': d90
'891': d900
'892': d901
'893': d902
'894': d903
'895': d904
'896': d905
'897': d906
'898': d907
'899': d908
'900': d909
'901': d91
'902': d910
'903': d911
'904': d912
'905': d913
'906': d914
'907': d915
'908': d916
'909': d917
'910': d918
'911': d919
'912': d92
'913': d920
'914': d921
'915': d922
'916': d923
'917': d924
'918': d925
'919': d926
'920': d927
'921': d928
'922': d929
'923': d93
'924': d930
'925': d931
'926': d932
'927': d933
'928': d934
'929': d935
'930': d936
'931': d937
'932': d938
'933': d939
'934': d94
'935': d940
'936': d941
'937': d942
'938': d943
'939': d944
'940': d945
'941': d946
'942': d947
'943': d948
'944': d949
'945': d95
'946': d950
'947': d951
'948': d952
'949': d953
'950': d954
'951': d955
'952': d956
'953': d957
'954': d958
'955': d959
'956': d96
'957': d960
'958': d961
'959': d962
'960': d963
'961': d964
'962': d965
'963': d966
'964': d967
'965': d968
'966': d969
'967': d97
'968': d970
'969': d971
'970': d972
'971': d973
'972': d974
'973': d975
'974': d976
'975': d977
'976': d978
'977': d979
'978': d98
'979': d980
'980': d981
'981': d982
'982': d983
'983': d984
'984': d985
'985': d986
'986': d987
'987': d988
'988': d989
'989': d99
'990': d990
'991': d991
'992': d992
'993': d993
'994': d994
'995': d995
'996': d996
'997': d997
'998': d998
'999': d999
splits:
- name: train
num_bytes: 21307658444.479
num_examples: 5976559
download_size: 19698451402
dataset_size: 21307658444.479
---
# Dataset Card for "ServiceProjectFall2023"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 22,314 | [
[
-0.04156494140625,
-0.0072479248046875,
0.010467529296875,
0.03955078125,
0.0023670196533203125,
0.0118865966796875,
0.04852294921875,
-0.007312774658203125,
0.052154541015625,
0.05230712890625,
-0.07513427734375,
-0.031982421875,
-0.0438232421875,
-0.015670... |
zelalt/content-papers-withprompt | 2023-10-27T00:27:54.000Z | [
"region:us"
] | zelalt | null | null | 0 | 17 | 2023-10-27T00:27:53 | ---
dataset_info:
features:
- name: id
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: abstract
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1283997
num_examples: 992
download_size: 797519
dataset_size: 1283997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "content-papers-withprompt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 589 | [
[
-0.0386962890625,
-0.01262664794921875,
0.023345947265625,
0.02069091796875,
-0.02734375,
-0.0011587142944335938,
0.008514404296875,
-0.0027294158935546875,
0.07293701171875,
0.032867431640625,
-0.0550537109375,
-0.062408447265625,
-0.06365966796875,
-0.0237... |
Ioana23/codeparrot-ds-50k | 2023-10-30T08:20:47.000Z | [
"region:us"
] | Ioana23 | null | null | 0 | 17 | 2023-10-30T08:19:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: copies
dtype: string
- name: size
dtype: string
- name: content
dtype: string
- name: license
dtype: string
splits:
- name: train
num_bytes: 652784990.8524525
num_examples: 50000
- name: valid
num_bytes: 6658657.886815172
num_examples: 500
download_size: 251530132
dataset_size: 659443648.7392677
---
# Dataset Card for "codeparrot-ds-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 757 | [
[
-0.050567626953125,
0.0083160400390625,
0.003032684326171875,
0.0170745849609375,
-0.02752685546875,
0.0242156982421875,
0.01373291015625,
0.006168365478515625,
0.06475830078125,
0.0301666259765625,
-0.05755615234375,
-0.054351806640625,
-0.03924560546875,
-... |
marziye-A/dataset-farma-test3 | 2023-11-01T10:15:26.000Z | [
"region:us"
] | marziye-A | null | null | 0 | 17 | 2023-11-01T09:51:51 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: name
dtype: string
splits:
- name: train
num_bytes: 74308913.54
num_examples: 2005
download_size: 72537312
dataset_size: 74308913.54
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset-farma-test3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.03924560546875,
-0.0211639404296875,
0.01226806640625,
0.017913818359375,
-0.0014886856079101562,
-0.0052032470703125,
0.034271240234375,
-0.020782470703125,
0.051025390625,
0.0239410400390625,
-0.054290771484375,
-0.04498291015625,
-0.03570556640625,
-0.... |
Shishir1807/test_drug | 2023-11-02T07:01:49.000Z | [
"region:us"
] | Shishir1807 | null | null | 0 | 17 | 2023-11-02T07:01:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
drAbreu/bc4chemd_ner | 2022-10-25T10:02:51.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:GitHub",
"language:en",
"license:unknown",
"region:us"
] | drAbreu | The automatic extraction of chemical information from text requires the recognition of chemical entity mentions as one of its key steps. When developing supervised named entity recognition (NER) systems, the availability of a large, manually annotated text corpus is desirable. Furthermore, large corpora permit the robust evaluation and comparison of different approaches that detect chemicals in documents. We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators, following annotation guidelines specifically defined for this task. The abstracts of the CHEMDNER corpus were selected to be representative for all major chemical disciplines. Each of the chemical entity mentions was manually labeled according to its structure-associated chemical entity mention (SACEM) class: abbreviation, family, formula, identifier, multiple, systematic and trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study between annotators, obtaining a percentage agreement of 91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts) we provide not only the Gold Standard manual annotations, but also mentions automatically detected by the 26 teams that participated in the BioCreative IV CHEMDNER chemical mention recognition task. In addition, we release the CHEMDNER silver standard corpus of automatically extracted mentions from 17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus in the BioC format has been generated as well. We propose a standard for required minimum information about entity annotations for the construction of domain specific corpora on chemical and drug entities. The CHEMDNER corpus and annotation guidelines are available at: http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/ | @article{Krallinger2015TheCC,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia},
journal={Journal of Cheminformatics},
year={2015},
volume={7},
pages={S2 - S2}
} | 1 | 16 | 2022-03-09T14:56:16 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- GitHub
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: bc4chemd
pretty_name: bc4chemd_ner
---
# Dataset Card for bc4chemd_ner
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/)
- **Repository:** [Github](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BC4CHEMD)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4331692/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
* Token Classification
* Named Entity Recognition
### Languages
- English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no chemical mentioned, `1` signals the first token of a chemical entity mention and `2` the subsequent tokens of the mention.
### Data Splits
```python
DatasetDict({
train: Dataset({
features: ['id', 'tokens', 'ner_tags'],
num_rows: 30683
})
validation: Dataset({
features: ['id', 'tokens', 'ner_tags'],
num_rows: 30640
})
test: Dataset({
features: ['id', 'tokens', 'ner_tags'],
num_rows: 26365
})
})
```
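As a hedged illustration (this helper is not part of the dataset repository; it only assumes the `tokens`/`ner_tags` encoding described in the Data Fields section, where `1` opens a mention and `2` continues it), the tag sequences can be decoded back into mention strings:

```python
# Hypothetical helper: recover entity mention strings from a sentence's
# `tokens` and `ner_tags`, where 1 marks the first token of a mention
# and 2 marks continuation tokens.
def extract_mentions(tokens, ner_tags):
    mentions, current = [], []
    for token, tag in zip(tokens, ner_tags):
        if tag == 1:                # beginning of a new mention
            if current:
                mentions.append(" ".join(current))
            current = [token]
        elif tag == 2 and current:  # continuation of the open mention
            current.append(token)
        else:                       # tag == 0, or a stray continuation tag
            if current:
                mentions.append(" ".join(current))
                current = []
    if current:
        mentions.append(" ".join(current))
    return mentions

print(extract_mentions(
    ["The", "sodium", "chloride", "dissolved", "in", "ethanol", "."],
    [0, 1, 2, 0, 0, 1, 0],
))  # ['sodium chloride', 'ethanol']
```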
## Dataset Creation
### Curation Rationale
The automatic extraction of chemical information from text requires the recognition of chemical
entity mentions as one of its key steps. When developing supervised named entity recognition
(NER) systems, the availability of a large, manually annotated text corpus is desirable.
Furthermore, large corpora permit the robust evaluation and comparison of different
approaches that detect chemicals in documents.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
### Annotations
#### Annotation process
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a
total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators,
following annotation guidelines specifically defined for this task.
#### Who are the annotators?
Expert chemistry literature curators
### Personal and Sensitive Information
It does not contain this kind of information
The abstracts of the CHEMDNER corpus were selected to be representative for all
major chemical disciplines. Each of the chemical entity mentions was manually
labeled according to its structure-associated chemical entity mention (SACEM)
class: abbreviation, family, formula, identifier, multiple, systematic and
trivial. The difficulty and consistency of tagging chemicals in text was measured using an agreement study
between annotators, obtaining a percentage agreement of 91.
### Licensing Information
Unknown
### Citation Information
```latex
@article{Krallinger2015TheCC,
title={The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author={Martin Krallinger and Obdulia Rabal and Florian Leitner and Miguel Vazquez and David Salgado and Zhiyong Lu and Robert Leaman and Yanan Lu and Dong-Hong Ji and Daniel M. Lowe and Roger A. Sayle and Riza Theresa Batista-Navarro and Rafal Rak and Torsten Huber and Tim Rockt{\"a}schel and S{\'e}rgio Matos and David Campos and Buzhou Tang and Hua Xu and Tsendsuren Munkhdalai and Keun Ho Ryu and S. V. Ramanan and P. Senthil Nathan and Slavko Zitnik and Marko Bajec and Lutz Weber and Matthias Irmer and Saber Ahmad Akhondi and Jan A. Kors and Shuo Xu and Xin An and Utpal Kumar Sikdar and Asif Ekbal and Masaharu Yoshioka and Thaer M. Dieb and Miji Choi and Karin M. Verspoor and Madian Khabsa and C. Lee Giles and Hongfang Liu and K. E. Ravikumar and Andre Lamurias and Francisco M. Couto and Hong-Jie Dai and Richard Tzong-Han Tsai and C Ata and Tolga Can and Anabel Usie and Rui Alves and Isabel Segura-Bedmar and Paloma Mart{\'i}nez and Julen Oyarz{\'a}bal and Alfonso Valencia},
journal={Journal of Cheminformatics},
year={2015},
volume={7},
pages={S2 - S2}
}
```
### Contributions
Thanks to [@GamalC](https://github.com/GamalC) for uploading this dataset to GitHub.
| 5,465 | [
[
-0.035736083984375,
-0.0257720947265625,
0.045379638671875,
0.0010814666748046875,
-0.005138397216796875,
0.006801605224609375,
-0.0178985595703125,
-0.034210205078125,
0.027435302734375,
0.01139068603515625,
-0.033172607421875,
-0.06982421875,
-0.04306030273437... |
BlackSamorez/2ch_b_dialogues | 2022-07-01T15:55:21.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ru",
"region:us"
] | BlackSamorez | Dialogues built from 2ch.hk/b/ threads | @InProceedings{huggingface:dataset,
title = {2ch b dialogues},
author={black_samorez},
year={2022}
} | 3 | 16 | 2022-06-05T13:05:55 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ru
license: []
multilinguality:
- monolingual
pretty_name: Dialogues mined from 2ch/b/.
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
---
# Dataset Card for 2ch_b_dialogues
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/BlackSamorez/ebanko
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Russian language dialogues mined from 2ch.hk/b/
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Russian
## Dataset Structure
### Data Instances
{
"dialogue": ["Glad to hear!", "Fine, thank you!", "Hi, how are you?"]
}
### Data Fields
- dialogue: list of posts ordered last-to-first
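Since posts are stored last-to-first, a reader will usually want them in chronological order; a minimal sketch using the instance shown above:

```python
# Illustrative only: restore chronological order of a stored dialogue,
# whose `dialogue` list is ordered last-to-first.
record = {"dialogue": ["Glad to hear!", "Fine, thank you!", "Hi, how are you?"]}
chronological = list(reversed(record["dialogue"]))
print(chronological)
# ['Hi, how are you?', 'Fine, thank you!', 'Glad to hear!']
```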
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
Fun
### Source Data
#### Initial Data Collection and Normalization
In a thread graph only vertices with a single parent were selected. Then non-overlapping threads of dialogues were built.
#### Who are the source language producers?
2ch.hk/b/ users
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
Morally questionable data
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
blacks_samorez
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | 2,837 | [
[
-0.0208740234375,
-0.05401611328125,
0.0112457275390625,
0.01172637939453125,
-0.025146484375,
0.0151824951171875,
-0.0276641845703125,
-0.02728271484375,
0.026123046875,
0.04803466796875,
-0.070556640625,
-0.06536865234375,
-0.046051025390625,
-0.0025844573... |
relbert/analogy_questions | 2023-05-16T20:24:12.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [Analogy Question](https://aclanthology.org/2021.acl-long.280/) | @inproceedings{ushio-etal-2021-bert,
title = "{BERT} is to {NLP} what {A}lex{N}et is to {CV}: Can Pre-Trained Language Models Identify Analogies?",
author = "Ushio, Asahi and
Espinosa Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.280",
doi = "10.18653/v1/2021.acl-long.280",
pages = "3609--3624",
abstract = "Analogies play a central role in human commonsense reasoning. The ability to recognize analogies such as {``}eye is to seeing what ear is to hearing{''}, sometimes referred to as analogical proportions, shape how we structure knowledge and understand language. Surprisingly, however, the task of identifying such analogies has not yet received much attention in the language model era. In this paper, we analyze the capabilities of transformer-based language models on this unsupervised task, using benchmarks obtained from educational settings, as well as more commonly used datasets. We find that off-the-shelf language models can identify analogies to a certain extent, but struggle with abstract and complex relations, and results are highly sensitive to model architecture and hyperparameters. Overall the best results were obtained with GPT-2 and RoBERTa, while configurations using BERT were not able to outperform word embedding models. Our results raise important questions for future work about how, and to what extent, pre-trained language models capture knowledge about abstract semantic relations.",
} | 2 | 16 | 2022-07-18T18:01:16 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: Analogy Question
---
# Dataset Card for "relbert/analogy_questions"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/2021.acl-long.280/](https://aclanthology.org/2021.acl-long.280/)
- **Dataset:** Analogy Questions
### Dataset Summary
This dataset contains 5 different word analogy question sets used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/).
- original analogy questions
| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
|-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
| `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
| `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) |
| `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) |
- extra analogy questions
| name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference |
|:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
| `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
| `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
| `nell_relational_similarity` | 400/600 | 5/7 | 4/6 | [relbert/nell_relational_similarity](https://huggingface.co/datasets/relbert/nell_relational_similarity) |
| `scan` | 178/1616 | 3,36,136,10,45,78,15,21,55,120,153,91,28/3,36,136,10,45,78,15,21,55,120,153,91,28 | 2/2 | [relbert/scientific_and_creative_analogy](https://huggingface.co/datasets/relbert/scientific_and_creative_analogy) |
## Dataset Structure
### Data Instances
An example of `test` looks as follows.
```
{
"stem": ["raphael", "painter"],
"answer": 2,
"choice": [["andersen", "plato"],
["reading", "berkshire"],
["marx", "philosopher"],
["tolstoi", "edison"]]
}
```
The `stem` is the query word pair, `choice` lists the candidate word pairs,
and `answer` gives the index of the correct candidate, starting from `0`.
All data is lowercased except the Google dataset.
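As a minimal, hypothetical sketch (the field names are taken from the example above, and `predict` stands in for any function mapping an instance to a candidate index), accuracy over a list of such instances can be computed as:

```python
# Illustrative only: score a predictor on analogy questions.
# `predict` maps an instance to an index into its `choice` list.
def accuracy(instances, predict):
    correct = sum(predict(ex) == ex["answer"] for ex in instances)
    return correct / len(instances)

example = {
    "stem": ["raphael", "painter"],
    "answer": 2,
    "choice": [["andersen", "plato"],
               ["reading", "berkshire"],
               ["marx", "philosopher"],
               ["tolstoi", "edison"]],
}

# A trivial baseline that always picks the first candidate scores 0 here.
print(accuracy([example], lambda ex: 0))  # 0.0
```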
### Citation Information
```
@inproceedings{ushio-etal-2021-bert-is,
title ={{BERT} is to {NLP} what {A}lex{N}et is to {CV}: {C}an {P}re-{T}rained {L}anguage {M}odels {I}dentify {A}nalogies?},
author={Ushio, Asahi and
Espinosa-Anke, Luis and
Schockaert, Steven and
Camacho-Collados, Jose},
booktitle={Proceedings of the {ACL}-{IJCNLP} 2021 Main Conference},
year={2021},
publisher={Association for Computational Linguistics}
}
```
### LICENSE
The LICENSE of all the resources is [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purposes or individual research, but restricted for commercial use.
| 4,493 | [
[
-0.04718017578125,
-0.06341552734375,
0.02154541015625,
0.004207611083984375,
-0.02252197265625,
-0.016876220703125,
-0.004909515380859375,
-0.02606201171875,
0.05517578125,
0.024566650390625,
-0.04974365234375,
-0.045562744140625,
-0.0229644775390625,
0.022... |
nielsr/rvl_cdip_10_examples_per_class | 2022-08-01T16:32:41.000Z | [
"region:us"
] | nielsr | null | null | 0 | 16 | 2022-08-01T16:03:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Rifky/indonesian-hoax-news | 2022-08-05T15:49:33.000Z | [
"region:us"
] | Rifky | null | null | 1 | 16 | 2022-08-03T13:50:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
PlanTL-GOB-ES/wnli-es | 2022-11-18T12:03:25.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:extended|glue",
"language:es",
"license:cc-by-4.0",
"region:us"
] | PlanTL-GOB-ES | Professional translation into Spanish of the Winograd NLI dataset as published in the GLUE Benchmark.
The Winograd NLI dataset presents 855 sentence pairs,
in which the first sentence contains an ambiguity and the second one a possible interpretation of it.
The label indicates if the interpretation is correct (1) or not (0). | ADD CITATION | 2 | 16 | 2022-09-16T13:51:45 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: wnli-es
size_categories:
- unknown
source_datasets:
- extended|glue
task_categories:
- text-classification
task_ids:
- natural-language-inference
---
# WNLI-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
"A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from Terry Winograd." Source: [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
The [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) presents 855 sentence pairs, in which the first sentence contains an ambiguity and the second one a possible interpretation of it. The label indicates if the interpretation is correct (1) or not (0).
This dataset is a professional translation into Spanish of [Winograd NLI dataset](https://dl.fbaipublicfiles.com/glue/data/WNLI.zip) as published in [GLUE Benchmark](https://gluebenchmark.com/tasks).
Both the original dataset and this translation are licenced under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Supported Tasks and Leaderboards
Textual entailment, Text classification, Language Model.
### Languages
* Spanish (es)
## Dataset Structure
### Data Instances
Three tsv files.
### Data Fields
- index
- sentence 1: first sentence of the pair
- sentence 2: second sentence of the pair
- label: relation between the two sentences:
* 0: the second sentence does not entail a correct interpretation of the first one (neutral)
* 1: the second sentence entails a correct interpretation of the first one (entailment)
### Data Splits
- wnli-train-es.csv: 636 sentence pairs
- wnli-dev-es.csv: 72 sentence pairs
- wnli-test-shuffled-es.csv: 147 sentence pairs
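As an illustrative sketch only (the tab delimiter and exact column names are assumptions based on the field list above, not a guarantee about the released files), each split can be parsed into (sentence 1, sentence 2, label) triples:

```python
# Illustrative only: parse a split file into sentence-pair/label triples.
# The delimiter and header names below are assumptions for this sketch.
import csv
import io

sample = "index\tsentence 1\tsentence 2\tlabel\n0\tOración A\tOración B\t1\n"
rows = csv.DictReader(io.StringIO(sample), delimiter="\t")
pairs = [(r["sentence 1"], r["sentence 2"], int(r["label"])) for r in rows]
print(pairs)  # [('Oración A', 'Oración B', 1)]
```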
## Dataset Creation
### Curation Rationale
We translated this dataset to contribute to the development of language models in Spanish.
### Source Data
- [GLUE Benchmark site](https://gluebenchmark.com)
#### Initial Data Collection and Normalization
This is a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish, commissioned by [BSC TeMU](https://temu.bsc.es/) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
#### Who are the source language producers?
For more information on how the Winograd NLI dataset was created, visit the webpage [The Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html).
### Annotations
#### Annotation process
We commissioned a professional translation of [WNLI dataset](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WS.html) into Spanish.
#### Who are the annotators?
The translation was commissioned from a professional translation agency.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es).
For further information, send an email to (plantl-gob-es@bsc.es).
This work was funded by the [Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA)](https://avancedigital.mineco.gob.es/en-us/Paginas/index.aspx) within the framework of the [Plan-TL](https://plantl.mineco.gob.es/Paginas/index.aspx).
### Licensing information
This work is licensed under [CC Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) License.
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Contributions
[N/A]
| 5,622 | [
[
-0.01009368896484375,
-0.033447265625,
0.01806640625,
0.0274200439453125,
-0.00791168212890625,
0.0015726089477539062,
-0.0308837890625,
-0.050445556640625,
0.03131103515625,
0.0247039794921875,
-0.05169677734375,
-0.062347412109375,
-0.052734375,
0.01240539... |
kkotkar1/course-reviews | 2022-10-04T00:50:55.000Z | [
"region:us"
] | kkotkar1 | null | null | 1 | 16 | 2022-09-30T21:04:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ghoumrassi/clothes_sample | 2022-10-15T18:07:22.000Z | [
"region:us"
] | ghoumrassi | null | null | 3 | 16 | 2022-10-15T15:50:15 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 20078406.0
num_examples: 990
download_size: 0
dataset_size: 20078406.0
---
# Dataset Card for "clothes_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 388 | [
[
-0.032196044921875,
-0.0129852294921875,
0.004100799560546875,
0.0120086669921875,
-0.0236968994140625,
-0.00850677490234375,
0.021484375,
-0.0199432373046875,
0.052490234375,
0.034210205078125,
-0.07391357421875,
-0.05499267578125,
-0.041107177734375,
-0.01... |
crystina-z/mmarco | 2023-02-07T14:21:54.000Z | [
"region:us"
] | crystina-z | mMARCO translated datasets | @misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Vitor Jeronymo and Hugo Queiroz Abonizio and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 16 | 2022-11-09T00:48:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dreamproit/bill_summary_us | 2023-10-17T04:16:57.000Z | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"bills",
"legal",
"region:us"
] | dreamproit | null | null | 4 | 16 | 2022-11-09T10:13:33 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
multilinguality:
- monolingual
pretty_name: bill_summary_us
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- bills
- legal
task_categories:
- summarization
task_ids: []
configs:
- config_name: default
---
# Dataset Card for "bill_summary_us"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BillML](https://github.com/dreamproit/BillML)
- **Repository:** [BillML](https://github.com/dreamproit/BillML)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Dataset for summarization of US Congressional bills (bill_summary_us).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
#### default
### Data Fields
- id: id of the bill in the format (congress number + bill type + bill number + bill version).
- congress: number of the congress.
- bill_type: type of the bill.
- bill_number: number of the bill.
- bill_version: version of the bill.
- sections: list of bill sections with section_id, text and header.
- sections_length: number of entries in the sections list.
- text: bill text.
- text_length: number of characters in the text.
- summary: summary of the bill.
- summary_length: number of characters in the summary.
- title: official title of the bill.
### Data Splits
train
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process, clarifying the meaning and potential legislative impact.
This dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.
As a result, this dataset collects bill and summary information; it provides text as a list of sections with the text and header. This could be used to create a summary of sections and then a summary of summaries.
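Since each record carries its text as a list of sections, a section-level pipeline can reassemble or chunk the document as needed. A minimal sketch (the field names follow the card above; the record content itself is invented for illustration):

```python
# Illustrative sketch: reassembling a bill's plain text from its `sections` list.
# Field names (`sections`, `section_id`, `header`, `text`) follow the data fields
# described in this card; the actual record content below is invented.
record = {
    "sections": [
        {"section_id": "s1", "header": "SHORT TITLE",
         "text": "This Act may be cited as the Example Act."},
        {"section_id": "s2", "header": "FINDINGS",
         "text": "Congress finds the following."},
    ],
}

def join_sections(sections):
    """Concatenate section headers and bodies into one plain-text document."""
    return "\n\n".join(f"{s['header']}\n{s['text']}" for s in sections)

full_text = join_sections(record["sections"])
# Each section contributes one header/body pair, separated by blank lines.
assert full_text.count("\n\n") + 1 == len(record["sections"])
```

The same section list could instead feed a summarize-then-summarize pipeline, producing one summary per section before condensing them.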
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
[govinfo.gov](https://www.govinfo.gov/)
#### Initial Data Collection and Normalization
The data consists of US Congress bills collected from the [govinfo.gov](https://www.govinfo.gov/) service provided by the United States Government Publishing Office (GPO) under a CC0-1.0 license.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[dreamproit.com](https://dreamproit.com/)
### Licensing Information
Bill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@aih](https://github.com/aih) [@BorodaUA](https://github.com/BorodaUA), [@alexbojko](https://github.com/alexbojko) for adding this dataset. | 6,079 | [
[
-0.036895751953125,
-0.039581298828125,
-0.0036773681640625,
0.01165771484375,
-0.0418701171875,
-0.00498199462890625,
-0.00902557373046875,
-0.023040771484375,
0.04058837890625,
0.05767822265625,
-0.032745361328125,
-0.07257080078125,
-0.045745849609375,
-0... |
bigbio/cantemist | 2022-12-22T15:44:17.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The original dataset is distributed in Brat format, and was randomly sampled into 3 subsets. The training, development and test sets contain 501, 500 and 300 documents each, respectively.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL. The task is divided in 3 subtasks: CANTEMIST-NER, CANTEMIST_NORM and CANTEMIST-CODING.
CANTEMIST-NER track: requires finding automatically tumor morphology mentions. All tumor morphology mentions are defined by their corresponding character offsets in UTF-8 plain text medical documents.
CANTEMIST-NORM track: clinical concept normalization or named entity normalization task that requires to return all tumor morphology entity mentions together with their corresponding eCIE-O-3.1 codes i.e. finding and normalizing tumor morphology mentions.
CANTEMIST-CODING track: requires returning for each document a ranked list of its corresponding ICD-O-3 codes. This is essentially an indexing or multi-label classification task for oncology clinical coding.
For further information, please visit https://temu.bsc.es/cantemist or send an email to encargo-pln-life@bsc.es | @article{miranda2020named,
title={Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.},
author={Miranda-Escalada, Antonio and Farr{\'e}, Eul{\`a}lia and Krallinger, Martin},
journal={IberLEF@ SEPLN},
pages={303--323},
year={2020}
} | 0 | 16 | 2022-11-13T22:07:32 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: CANTEMIST
homepage: https://temu.bsc.es/cantemist/?p=4338
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- TEXT_CLASSIFICATION
---
# Dataset Card for CANTEMIST
## Dataset Description
- **Homepage:** https://temu.bsc.es/cantemist/?p=4338
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED,TXTCLASS
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The original dataset is distributed in Brat format, and was randomly sampled into 3 subsets. The training, development and test sets contain 501, 500 and 300 documents each, respectively.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL. The task is divided in 3 subtasks: CANTEMIST-NER, CANTEMIST_NORM and CANTEMIST-CODING.
CANTEMIST-NER track: requires finding automatically tumor morphology mentions. All tumor morphology mentions are defined by their corresponding character offsets in UTF-8 plain text medical documents.
CANTEMIST-NORM track: clinical concept normalization or named entity normalization task that requires to return all tumor morphology entity mentions together with their corresponding eCIE-O-3.1 codes i.e. finding and normalizing tumor morphology mentions.
CANTEMIST-CODING track: requires returning for each document a ranked list of its corresponding ICD-O-3 codes. This is essentially an indexing or multi-label classification task for oncology clinical coding.
For further information, please visit https://temu.bsc.es/cantemist or send an email to encargo-pln-life@bsc.es
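Since the corpus is distributed in Brat standoff format with mentions defined by character offsets, a minimal sketch of reading one entity annotation looks like this (the sample text, `.ann` line, and entity type spelling here are illustrative, not actual corpus content):

```python
# Minimal sketch of parsing a Brat-format entity annotation, as used by the
# CANTEMIST corpus. The sample text and annotation line below are invented,
# and the entity type name is illustrative.
text = "Paciente con carcinoma ductal infiltrante de mama."

# A Brat .ann entity line: ID <TAB> type start end <TAB> surface text
ann_line = "T1\tMORFOLOGIA_NEOPLASIA 13 41\tcarcinoma ductal infiltrante"

def parse_brat_entity(line):
    """Parse one Brat entity line into (id, type, start, end, mention)."""
    ent_id, type_span, mention = line.split("\t")
    ent_type, start, end = type_span.split(" ")
    return ent_id, ent_type, int(start), int(end), mention

ent_id, ent_type, start, end, mention = parse_brat_entity(ann_line)
# The character offsets must point at the mention inside the source text.
assert text[start:end] == mention
```

Validating offsets against the UTF-8 plain text this way is a useful sanity check before training NER models on the NER track.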
## Citation Information
```
@article{miranda2020named,
title={Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.},
  author={Miranda-Escalada, Antonio and Farr{\'e}, Eul{\`a}lia and Krallinger, Martin},
journal={IberLEF@ SEPLN},
pages={303--323},
year={2020}
}
```
| 2,364 | [
[
0.00437164306640625,
-0.00989532470703125,
0.04437255859375,
0.035064697265625,
-0.0472412109375,
-0.008331298828125,
-0.021240234375,
-0.027557373046875,
0.042999267578125,
0.048583984375,
-0.04510498046875,
-0.0999755859375,
-0.07354736328125,
0.0132141113... |
bigbio/genia_relation_corpus | 2022-12-22T15:44:40.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change. | @inproceedings{pyysalo-etal-2009-static,
title = "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1301",
pages = "1--9",
}
@article{article,
author = {Ohta, Tomoko and Pyysalo, Sampo and Kim, Jin-Dong and Tsujii, Jun'ichi},
year = {2010},
month = {10},
pages = {917-28},
title = {A reevaluation of biomedical named entity - term relations},
volume = {8},
journal = {Journal of bioinformatics and computational biology},
doi = {10.1142/S0219720010005014}
}
@MISC{Hoehndorf_applyingontology,
author = {Robert Hoehndorf and Axel-cyrille Ngonga Ngomo and Sampo Pyysalo and Tomoko Ohta and Anika Oellrich and
Dietrich Rebholz-schuhmann},
title = {Applying ontology design patterns to the implementation of relations in GENIA},
year = {}
} | 1 | 16 | 2022-11-13T22:08:39 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: GENIA Relation Corpus
homepage: http://www.geniaproject.org/genia-corpus/relation-corpus
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- RELATION_EXTRACTION
---
# Dataset Card for GENIA Relation Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/relation-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change.
## Citation Information
```
@inproceedings{pyysalo-etal-2009-static,
title = "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1301",
pages = "1--9",
}
@article{article,
author = {Ohta, Tomoko and Pyysalo, Sampo and Kim, Jin-Dong and Tsujii, Jun'ichi},
year = {2010},
month = {10},
pages = {917-28},
title = {A reevaluation of biomedical named entity - term relations},
volume = {8},
journal = {Journal of bioinformatics and computational biology},
doi = {10.1142/S0219720010005014}
}
@MISC{Hoehndorf_applyingontology,
author = {Robert Hoehndorf and Axel-cyrille Ngonga Ngomo and Sampo Pyysalo and Tomoko Ohta and Anika Oellrich and
Dietrich Rebholz-schuhmann},
title = {Applying ontology design patterns to the implementation of relations in GENIA},
year = {}
}
```
| 2,337 | [
[
-0.018157958984375,
-0.039764404296875,
0.02911376953125,
0.006633758544921875,
-0.0269317626953125,
-0.0111236572265625,
-0.01033782958984375,
-0.043304443359375,
0.03924560546875,
0.0142364501953125,
-0.043243408203125,
-0.052520751953125,
-0.032562255859375,
... |
bigbio/sciq | 2022-12-22T15:46:48.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | bigbio | The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided. | @inproceedings{welbl-etal-2017-crowdsourcing,
title = "Crowdsourcing Multiple Choice Science Questions",
author = "Welbl, Johannes and
Liu, Nelson F. and
Gardner, Matt",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4413",
doi = "10.18653/v1/W17-4413",
pages = "94--106",
} | 1 | 16 | 2022-11-13T22:12:14 |
---
language:
- en
bigbio_language:
- English
license: cc-by-nc-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_NC_3p0
pretty_name: SciQ
homepage: https://allenai.org/data/sciq
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for SciQ
## Dataset Description
- **Homepage:** https://allenai.org/data/sciq
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided.
## Citation Information
```
@inproceedings{welbl-etal-2017-crowdsourcing,
title = "Crowdsourcing Multiple Choice Science Questions",
author = "Welbl, Johannes and
Liu, Nelson F. and
Gardner, Matt",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4413",
doi = "10.18653/v1/W17-4413",
pages = "94--106",
}
```
| 1,280 | [
[
-0.01097869873046875,
-0.026153564453125,
0.038177490234375,
0.0194549560546875,
-0.010528564453125,
-0.00156402587890625,
0.00609588623046875,
-0.0140228271484375,
0.019989013671875,
0.0259246826171875,
-0.037567138671875,
-0.0296173095703125,
-0.02476501464843... |
bigbio/spl_adr_200db | 2022-12-22T15:46:56.000Z | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | bigbio | The United States Food and Drug Administration (FDA) partnered with the National Library
of Medicine to create a pilot dataset containing standardised information about known
adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),
the documents FDA uses to exchange information about drugs and other products, were
manually annotated for adverse reactions at the mention level to facilitate development
and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were
then normalised to the Unified Medical Language System (UMLS) and to the Medical
Dictionary for Regulatory Activities (MedDRA). | @article{demner2018dataset,
author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson,
Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph},
title = {A dataset of 200 structured product labels annotated for adverse drug reactions},
journal = {Scientific Data},
volume = {5},
year = {2018},
month = {01},
pages = {180001},
url = {
https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions
},
doi = {10.1038/sdata.2018.1}
} | 2 | 16 | 2022-11-13T22:12:21 |
---
language:
- en
bigbio_language:
- English
license: cc0-1.0
multilinguality: monolingual
bigbio_license_shortname: CC0_1p0
pretty_name: SPL ADR
homepage: https://bionlp.nlm.nih.gov/tac2017adversereactions/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
- RELATION_EXTRACTION
---
# Dataset Card for SPL ADR
## Dataset Description
- **Homepage:** https://bionlp.nlm.nih.gov/tac2017adversereactions/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED,RE
The United States Food and Drug Administration (FDA) partnered with the National Library
of Medicine to create a pilot dataset containing standardised information about known
adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),
the documents FDA uses to exchange information about drugs and other products, were
manually annotated for adverse reactions at the mention level to facilitate development
and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were
then normalised to the Unified Medical Language System (UMLS) and to the Medical
Dictionary for Regulatory Activities (MedDRA).
## Citation Information
```
@article{demner2018dataset,
author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson,
Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph},
title = {A dataset of 200 structured product labels annotated for adverse drug reactions},
journal = {Scientific Data},
volume = {5},
year = {2018},
month = {01},
pages = {180001},
url = {
https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions
},
doi = {10.1038/sdata.2018.1}
}
```
| 1,851 | [
[
0.0013103485107421875,
-0.0389404296875,
0.004055023193359375,
0.01174163818359375,
-0.0006012916564941406,
-0.0244140625,
-0.004299163818359375,
-0.027801513671875,
0.020172119140625,
0.059478759765625,
-0.027252197265625,
-0.0723876953125,
-0.03533935546875,
... |
cjvt/si_nli | 2023-04-04T08:51:01.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:sl",
"... | cjvt | SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs
(premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral".
The dataset was created using sentences that appear in the Slovenian reference corpus ccKres.
Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels.
The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral)
for each candidate sentence pair. | @misc{sinli,
title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1707},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
} | 0 | 16 | 2022-11-15T08:41:29 | ---
annotations_creators:
- expert-generated
language:
- sl
language_creators:
- found
- expert-generated
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Slovene natural language inference dataset
size_categories:
- 1K<n<10K
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- natural-language-inference
dataset_info:
- config_name: default
features:
- name: pair_id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: annotation1
dtype: string
- name: annotator1_id
dtype: string
- name: annotation2
dtype: string
- name: annotator2_id
dtype: string
- name: annotation3
dtype: string
- name: annotator3_id
dtype: string
- name: annotation_final
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1352635
num_examples: 4392
- name: validation
num_bytes: 164561
num_examples: 547
- name: test
num_bytes: 246518
num_examples: 998
download_size: 410093
dataset_size: 1763714
- config_name: public
features:
- name: pair_id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: annotation1
dtype: string
- name: annotator1_id
dtype: string
- name: annotation2
dtype: string
- name: annotator2_id
dtype: string
- name: annotation3
dtype: string
- name: annotator3_id
dtype: string
- name: annotation_final
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 1352591
num_examples: 4392
- name: validation
num_bytes: 164517
num_examples: 547
- name: test
num_bytes: 246474
num_examples: 998
download_size: 410093
dataset_size: 1763582
- config_name: private
features:
- name: pair_id
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: annotation1
dtype: string
- name: annotator1_id
dtype: string
- name: annotation2
dtype: string
- name: annotator2_id
dtype: string
- name: annotation3
dtype: string
- name: annotator3_id
dtype: string
- name: annotation_final
dtype: string
- name: label
dtype: string
splits:
- name: train
- name: validation
- name: test
download_size: 0
dataset_size: 0
---
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998.
Only the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others.
If you have access to the private test set (with labels), you can load it instead of the public one via `datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")`.
### Supported Tasks and Leaderboards
Natural language inference.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'pair_id': 'P0',
'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
'annotation1': 'entailment',
'annotator1_id': 'annotator_C',
'annotation2': 'entailment',
'annotator2_id': 'annotator_A',
'annotation3': '',
'annotator3_id': '',
'annotation_final': 'entailment',
'label': 'entailment'
}
```
### Data Fields
- `pair_id`: string identifier of the pair (`""` in the test set),
- `premise`: premise sentence,
- `hypothesis`: hypothesis sentence,
- `annotation1`: the first annotation (`""` if not available),
- `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
- `annotation2`: the second annotation (`""` if not available),
- `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
- `annotation3`: the third annotation (`""` if not available),
- `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
- `annotation_final`: aggregated annotation where it could be unanimously determined (`""` if not available or an unanimous agreement could not be reached),
- `label`: aggregated annotation: either same as `annotation_final` (in case of agreement), same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set**. This aggregation is just the most simple possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement).
\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.
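The simple aggregation rule described above can be sketched as follows (a minimal illustration of the stated rule, not the authors' actual script; the example records are invented):

```python
# Sketch of the `label` aggregation described in the card: use
# `annotation_final` when the annotators reached unanimous agreement,
# otherwise fall back to `annotation1`. The sample records are invented.
def aggregate_label(example):
    """Return the aggregated label for one SI-NLI example."""
    if example.get("annotation_final"):
        return example["annotation_final"]
    return example["annotation1"]

agreed = {
    "annotation1": "entailment",
    "annotation2": "entailment",
    "annotation_final": "entailment",
}
disagreed = {
    "annotation1": "neutral",
    "annotation2": "contradiction",
    "annotation_final": "",  # no unanimous agreement
}

assert aggregate_label(agreed) == "entailment"
assert aggregate_label(disagreed) == "neutral"
```

As the card notes, a user may prefer a more advanced scheme based on the individual annotations (e.g. learning with disagreement) instead of this fallback.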
## Additional Information
### Dataset Curators
Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{sinli,
title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1707},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. | 6,567 | [
[
-0.028533935546875,
-0.04693603515625,
0.019683837890625,
0.0318603515625,
-0.0218353271484375,
-0.0284423828125,
-0.0225982666015625,
-0.032623291015625,
0.032318115234375,
0.05078125,
-0.04864501953125,
-0.057373046875,
-0.0458984375,
0.0284423828125,
... |
Norod78/RickAndMorty-HorizontalMirror-blip-captions | 2022-11-15T14:38:40.000Z | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | Norod78 | null | null | 0 | 16 | 2022-11-15T14:31:28 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 161499799.0
num_examples: 530
download_size: 161488169
dataset_size: 161499799.0
pretty_name: 'Rick and Morty, Horizontal Mirror, BLIP captions'
size_categories:
- n<1K
tags: []
task_categories:
- text-to-image
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
---
# Dataset Card for "RickAndMorty-HorizontalMirror-blip-captions" | 580 | [
[
-0.031829833984375,
-0.0099639892578125,
-0.0026397705078125,
0.0382080078125,
-0.059661865234375,
0.01462554931640625,
-0.01165008544921875,
0.0057220458984375,
0.0305633544921875,
0.0384521484375,
-0.03631591796875,
-0.05255126953125,
-0.03857421875,
0.023... |
thennal/IMaSC | 2022-12-08T17:21:02.000Z | [
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ml",
"license:cc-by-sa-4.0",
"arxiv:2211.12796",
... | thennal | null | null | 2 | 16 | 2022-11-17T05:16:00 | ---
annotations_creators:
- expert-generated
language:
- ml
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ICFOSS Malayalam Speech Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- text-to-speech
- automatic-speech-recognition
task_ids: []
---
# IMaSC: ICFOSS Malayalam Speech Corpus
**IMaSC** is a Malayalam text and speech corpus made available by [ICFOSS](https://icfoss.in/) for the purpose of developing speech technology for Malayalam, particularly text-to-speech. The corpus contains 34,473 text-audio pairs of Malayalam sentences spoken by 8 speakers, totalling approximately 50 hours of audio.
## Dataset Description
- **Paper:** [IMaSC — ICFOSS Malayalam Speech Corpus](https://arxiv.org/abs/2211.12796)
- **Point of Contact:** [Thennal D K](mailto:thennal10@gmail.com)
## Dataset Structure
The dataset consists of 34,473 instances with fields `text`, `speaker`, and `audio`. The audio is mono, sampled at 16 kHz. The transcription is normalized and only includes Malayalam characters and common punctuation. The table given below specifies how the 34,473 instances are split between the speakers, along with some basic speaker info:
| Speaker | Gender | Age | Time (HH:MM:SS) | Sentences |
| --- | --- | --- | --- | --- |
| Joji | Male | 28 | 06:08:55 | 4,332 |
| Sonia | Female | 43 | 05:22:39 | 4,294 |
| Jijo | Male | 26 | 05:34:05 | 4,093 |
| Greeshma | Female | 22 | 06:32:39 | 4,416 |
| Anil | Male | 48 | 05:58:34 | 4,239 |
| Vidhya | Female | 23 | 04:21:56 | 3,242 |
| Sonu | Male | 25 | 06:04:43 | 4,219 |
| Simla | Female | 24 | 09:34:21 | 5,638 |
| **Total** | | | **49:37:54** | **34,473** |
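As a quick cross-check, the per-speaker rows above can be summed in a few lines of Python; the sentence counts reproduce the stated total exactly, and the summed durations agree with it up to per-row rounding:

```python
# Per-speaker durations and sentence counts copied from the table above.
durations = ["06:08:55", "05:22:39", "05:34:05", "06:32:39",
             "05:58:34", "04:21:56", "06:04:43", "09:34:21"]
sentences = [4332, 4294, 4093, 4416, 4239, 3242, 4219, 5638]

def to_seconds(hms):
    h, m, s = map(int, hms.split(":"))
    return h * 3600 + m * 60 + s

total = sum(to_seconds(d) for d in durations)
print(f"{total // 3600}:{total % 3600 // 60:02d}:{total % 60:02d}")  # 49:37:52
print(sum(sentences))  # 34473
```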
### Data Instances
An example instance is given below:
```json
{'text': 'സർവ്വകലാശാല വൈസ് ചാൻസലർ ഡോ. ചന്ദ്രബാബുവിനും സംഭവം തലവേദനയാവുകയാണ്',
'speaker': 'Sonia',
'audio': {'path': None,
'array': array([ 0.00921631, 0.00930786, 0.00939941, ..., -0.00497437,
-0.00497437, -0.00497437]),
'sampling_rate': 16000}}
```
### Data Fields
- **text** (str): Transcription of the audio file
- **speaker** (str): The name of the speaker
- **audio** (dict): Audio object including the loaded audio array, the sampling rate, and the path to the audio file (always `None`)
### Data Splits
We provide all the data in a single `train` split. The loaded dataset object thus looks like this:
```json
DatasetDict({
train: Dataset({
features: ['text', 'speaker', 'audio'],
num_rows: 34473
})
})
```
### Dataset Creation
The text is sourced from [Malayalam Wikipedia](https://ml.wikipedia.org), and read by our speakers in studio conditions. Extensive error correction was conducted to provide a clean, accurate database. Further details are given in our paper, accessible at [https://arxiv.org/abs/2211.12796](https://arxiv.org/abs/2211.12796).
## Additional Information
### Licensing
The corpus is made available under the [Creative Commons license (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation
```
@misc{gopinath2022imasc,
title={IMaSC -- ICFOSS Malayalam Speech Corpus},
author={Deepa P Gopinath and Thennal D K and Vrinda V Nair and Swaraj K S and Sachin G},
year={2022},
eprint={2211.12796},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
| 3,343 | [
[
-0.03436279296875,
-0.041839599609375,
0.01239013671875,
0.030242919921875,
-0.031341552734375,
-0.0011014938354492188,
-0.0254669189453125,
-0.021942138671875,
0.042266845703125,
0.0200347900390625,
-0.0355224609375,
-0.039276123046875,
-0.05364990234375,
0... |
israfelsr/mm_tiny_imagenet | 2022-12-16T11:19:54.000Z | [
"region:us"
] | israfelsr | null | null | 1 | 16 | 2022-11-17T12:44:50 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': n01443537
'1': n01629819
'2': n01641577
'3': n01644900
'4': n01698640
'5': n01742172
'6': n01768244
'7': n01770393
'8': n01774384
'9': n01774750
'10': n01784675
'11': n01882714
'12': n01910747
'13': n01917289
'14': n01944390
'15': n01950731
'16': n01983481
'17': n01984695
'18': n02002724
'19': n02056570
'20': n02058221
'21': n02074367
'22': n02094433
'23': n02099601
'24': n02099712
'25': n02106662
'26': n02113799
'27': n02123045
'28': n02123394
'29': n02124075
'30': n02125311
'31': n02129165
'32': n02132136
'33': n02165456
'34': n02226429
'35': n02231487
'36': n02233338
'37': n02236044
'38': n02268443
'39': n02279972
'40': n02281406
'41': n02321529
'42': n02364673
'43': n02395406
'44': n02403003
'45': n02410509
'46': n02415577
'47': n02423022
'48': n02437312
'49': n02480495
'50': n02481823
'51': n02486410
'52': n02504458
'53': n02509815
'54': n02666347
'55': n02669723
'56': n02699494
'57': n02769748
'58': n02788148
'59': n02791270
'60': n02793495
'61': n02795169
'62': n02802426
'63': n02808440
'64': n02814533
'65': n02814860
'66': n02815834
'67': n02823428
'68': n02837789
'69': n02841315
'70': n02843684
'71': n02883205
'72': n02892201
'73': n02909870
'74': n02917067
'75': n02927161
'76': n02948072
'77': n02950826
'78': n02963159
'79': n02977058
'80': n02988304
'81': n03014705
'82': n03026506
'83': n03042490
'84': n03085013
'85': n03089624
'86': n03100240
'87': n03126707
'88': n03160309
'89': n03179701
'90': n03201208
'91': n03255030
'92': n03355925
'93': n03373237
'94': n03388043
'95': n03393912
'96': n03400231
'97': n03404251
'98': n03424325
'99': n03444034
'100': n03447447
'101': n03544143
'102': n03584254
'103': n03599486
'104': n03617480
'105': n03637318
'106': n03649909
'107': n03662601
'108': n03670208
'109': n03706229
'110': n03733131
'111': n03763968
'112': n03770439
'113': n03796401
'114': n03814639
'115': n03837869
'116': n03838899
'117': n03854065
'118': n03891332
'119': n03902125
'120': n03930313
'121': n03937543
'122': n03970156
'123': n03977966
'124': n03980874
'125': n03983396
'126': n03992509
'127': n04008634
'128': n04023962
'129': n04070727
'130': n04074963
'131': n04099969
'132': n04118538
'133': n04133789
'134': n04146614
'135': n04149813
'136': n04179913
'137': n04251144
'138': n04254777
'139': n04259630
'140': n04265275
'141': n04275548
'142': n04285008
'143': n04311004
'144': n04328186
'145': n04356056
'146': n04366367
'147': n04371430
'148': n04376876
'149': n04398044
'150': n04399382
'151': n04417672
'152': n04456115
'153': n04465666
'154': n04486054
'155': n04487081
'156': n04501370
'157': n04507155
'158': n04532106
'159': n04532670
'160': n04540053
'161': n04560804
'162': n04562935
'163': n04596742
'164': n04598010
'165': n06596364
'166': n07056680
'167': n07583066
'168': n07614500
'169': n07615774
'170': n07646821
'171': n07647870
'172': n07657664
'173': n07695742
'174': n07711569
'175': n07715103
'176': n07720875
'177': n07749582
'178': n07753592
'179': n07768694
'180': n07871810
'181': n07873807
'182': n07875152
'183': n07920052
'184': n07975909
'185': n08496334
'186': n08620881
'187': n08742578
'188': n09193705
'189': n09246464
'190': n09256479
'191': n09332890
'192': n09428293
'193': n12267677
'194': n12520864
'195': n13001041
'196': n13652335
'197': n13652994
'198': n13719102
'199': n14991210
- name: caption
dtype: string
- name: label_name
dtype: string
splits:
- name: train
num_bytes: 159978960.0
num_examples: 80000
- name: validation
num_bytes: 40004701.0
num_examples: 20000
download_size: 149059401
dataset_size: 199983661.0
---
# Dataset Card for "mm_tiny_imagenet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 5,866 | [
[
-0.05255126953125,
-0.010284423828125,
0.01016998291015625,
0.005756378173828125,
-0.0269927978515625,
-0.02093505859375,
0.022674560546875,
-0.0045318603515625,
0.07733154296875,
0.028350830078125,
-0.0560302734375,
-0.0426025390625,
-0.04486083984375,
-0.0... |
graphs-datasets/AQSOL | 2023-02-07T16:36:58.000Z | [
"task_categories:graph-ml",
"license:mit",
"arxiv:2003.00982",
"region:us"
] | graphs-datasets | null | null | 0 | 16 | 2022-12-08T11:54:55 | ---
license: mit
task_categories:
- graph-ml
---
# Dataset Card for AQSOL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)
### Dataset Summary
The AQSOL dataset comes "from the Benchmarking Graph Neural Networks paper based on AqSolDB, a standardized database of 9,982 molecular graphs with their aqueous solubility values, collected from 9 different data sources" (PyGeometric doc).
### Supported Tasks and Leaderboards
`AQSOL` should be used for graph regression, on aqueous solubility.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/AQSOL")
# For the train set (replace by valid or test as needed);
# each row is converted into a PyG Data object field by field.
dataset_pg_list = [
    Data(x=torch.tensor(graph["node_feat"]),
         edge_index=torch.tensor(graph["edge_index"]),
         edge_attr=torch.tensor(graph["edge_attr"]),
         y=torch.tensor(graph["y"]))
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| #graphs | 9,833 |
| average #nodes | 17.6 |
| average #edges | 35.8 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the number of labels available to predict
- `num_nodes` (int): number of nodes of the graph
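The shape constraints implied by these fields can be checked on a toy row with the same schema (the values below are made up for illustration only):

```python
# A toy row mirroring the dataset schema (illustrative values, not real data).
row = {
    "node_feat": [[0.0], [1.0], [2.0]],   # 3 nodes, 1 feature each
    "edge_index": [[0, 1], [1, 2]],       # 2 edges: 0 -> 1 and 1 -> 2
    "edge_attr": [[1.0], [1.0]],          # one feature vector per edge
    "y": [0.5],                           # one regression target
    "num_nodes": 3,
}

assert len(row["node_feat"]) == row["num_nodes"]
assert len(row["edge_index"]) == 2                    # source row and target row
num_edges = len(row["edge_index"][0])
assert all(len(side) == num_edges for side in row["edge_index"])
assert len(row["edge_attr"]) == num_edges             # one attribute row per edge
print("row is consistent")
```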
### Data Splits
The data comes pre-split; the splits follow the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under MIT license.
### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
author = {Vijay Prakash Dwivedi and
Chaitanya K. Joshi and
Thomas Laurent and
Yoshua Bengio and
Xavier Bresson},
title = {Benchmarking Graph Neural Networks},
journal = {CoRR},
volume = {abs/2003.00982},
year = {2020},
url = {https://arxiv.org/abs/2003.00982},
eprinttype = {arXiv},
eprint = {2003.00982},
timestamp = {Sat, 23 Jan 2021 01:14:30 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | 3,054 | [
[
-0.01537322998046875,
-0.0199737548828125,
0.00936126708984375,
0.00403594970703125,
0.0016527175903320312,
-0.00972747802734375,
-0.0022258758544921875,
-0.0211944580078125,
0.020660400390625,
0.013397216796875,
-0.037872314453125,
-0.0491943359375,
-0.03179931... |
Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75 | 2022-12-29T03:19:16.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-3.0",
"region:us"
] | Jean-Baptiste | null | null | 0 | 16 | 2022-12-24T03:49:34 | ---
language:
- en
dataset_info:
splits:
- name: test
num_examples: 785
- name: train
num_examples: 4446
annotations_creators:
- expert-generated
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
pretty_name: financial_news_sentiment_mixte_with_phrasebank_75
size_categories:
- 1K<n<10K
tags: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
---
# Dataset Card for "financial_news_sentiment_mixte_with_phrasebank_75"
This is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% of annotators.
In addition I added ~2000 articles of Canadian news where sentiment was validated manually.
The dataset also includes a column `topic`, which contains one of the following values:
* acquisition
* other
* quaterly financial release
* appointment to new position
* dividend
* corporate update
* drillings results
* conference
* share repurchase program
* grant of stocks
This was generated automatically using a zero-shot classification model and **was not** reviewed manually.
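An agreement filter of the kind applied here (keep a sentence only if at least 75% of annotators agree on its label) can be sketched as follows; this is a minimal illustration with made-up votes, not the original curation script:

```python
def keep_sentence(votes, threshold=0.75):
    """Keep a sentence if the majority label reaches the agreement threshold."""
    top = max(set(votes), key=votes.count)
    return votes.count(top) / len(votes) >= threshold

print(keep_sentence(["positive"] * 3 + ["neutral"]))       # 3/4 = 75% agreement -> True
print(keep_sentence(["positive", "positive", "neutral"]))  # ~67% agreement -> False
```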
## References
Original dataset is available here:
[https://huggingface.co/datasets/financial_phrasebank]
| 1,207 | [
[
-0.0277557373046875,
-0.03753662109375,
0.015625,
0.05352783203125,
-0.03582763671875,
0.0250701904296875,
-0.003936767578125,
-0.007274627685546875,
0.053619384765625,
0.0543212890625,
-0.03497314453125,
-0.07281494140625,
-0.055206298828125,
0.005577087402... |
neulab/odex | 2023-02-10T18:01:34.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"language:es",
"language:ja",
"language:ru",
"license:cc-by-sa-4.0",
"region:us"
] | neulab | ODEX is an Open-Domain EXecution-based NL-to-Code generation data benchmark.
It contains 945 samples with a total of 1,707 human-written test cases,
covering intents in four different natural languages -- 439 in English, 90 in Spanish, 164 in Japanese, and 252 in Russian. | @article{wang2022execution,
title={Execution-Based Evaluation for Open-Domain Code Generation},
author={Wang, Zhiruo and Zhou, Shuyan and Fried, Daniel and Neubig, Graham},
journal={arXiv preprint arXiv:2212.10481},
year={2022}
} | 6 | 16 | 2023-01-06T14:30:00 | ---
license: cc-by-sa-4.0
task_categories:
- text2text-generation
- text-generation
language:
- en
- es
- ja
- ru
size_categories:
- n<1K
---
__ODEX__ is an Open-Domain EXecution-based NL-to-Code generation data benchmark.
It contains 945 samples with a total of 1,707 human-written test cases, covering intents in four different natural languages -- 439 in English, 90 in Spanish, 164 in Japanese, and 252 in Russian.
You can load the dataset by specifying a subset from *en, es, ja, ru* (by default the english subset *en* is loaded):
```python
from datasets import load_dataset
ds = load_dataset("neulab/odex", "ja", split="test")
```
If you find our dataset useful, please cite the paper
```
@article{wang2022execution,
title={Execution-Based Evaluation for Open-Domain Code Generation},
  author={Wang, Zhiruo and Zhou, Shuyan and Fried, Daniel and Neubig, Graham},
journal={arXiv preprint arXiv:2212.10481},
year={2022}
}
``` | 932 | [
[
-0.033905029296875,
-0.03546142578125,
0.0117645263671875,
0.038909912109375,
-0.0011224746704101562,
-0.018829345703125,
-0.011260986328125,
-0.028350830078125,
-0.0137176513671875,
0.0318603515625,
-0.0251312255859375,
-0.0577392578125,
-0.01447296142578125,
... |
Cohere/wikipedia-22-12-fr-embeddings | 2023-03-22T16:53:41.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 4 | 16 | 2023-01-14T13:09:16 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (fr)](https://fr.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-fr-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | 3,845 | [
[
-0.051239013671875,
-0.0496826171875,
0.01190185546875,
0.0023555755615234375,
-0.01319122314453125,
-0.007144927978515625,
-0.0240936279296875,
-0.01910400390625,
0.042938232421875,
-0.0011587142944335938,
-0.038299560546875,
-0.06256103515625,
-0.0465393066406... |
cyrilzhang/financial_phrasebank_split | 2023-01-17T21:26:08.000Z | [
"region:us"
] | cyrilzhang | null | null | 1 | 16 | 2023-01-17T21:26:00 | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: neutral
2: positive
splits:
- name: train
num_bytes: 611259.9339661576
num_examples: 4361
- name: test
num_bytes: 67980.06603384235
num_examples: 485
download_size: 418548
dataset_size: 679240.0
---
# Dataset Card for "financial_phrasebank_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 578 | [
[
-0.037139892578125,
-0.04193115234375,
0.006023406982421875,
0.0312042236328125,
-0.023773193359375,
0.0238189697265625,
0.0123138427734375,
-0.0021152496337890625,
0.058197021484375,
0.04754638671875,
-0.050506591796875,
-0.047607421875,
-0.045440673828125,
... |
csinva/fmri_language_responses | 2023-02-12T22:46:10.000Z | [
"region:us"
] | csinva | null | null | 1 | 16 | 2023-02-12T22:33:43 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
grosenthal/latin_english_translation | 2023-07-17T21:59:06.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:la",
"language:en",
"license:mit",
"doi:10.57967/hf/0903",
"region:us"
] | grosenthal | null | null | 4 | 16 | 2023-02-28T00:10:51 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: la
dtype: string
- name: en
dtype: string
- name: file
dtype: string
splits:
- name: train
num_bytes: 39252644
num_examples: 99343
- name: test
num_bytes: 405056
num_examples: 1014
- name: valid
num_bytes: 392886
num_examples: 1014
download_size: 25567350
dataset_size: 40050586
license: mit
task_categories:
- translation
language:
- la
- en
pretty_name: Latin to English Translation Pairs
size_categories:
- 10K<n<100K
---
# Dataset Card for "latin_english_parallel"
101k translation pairs between Latin and English, split 99/1/1 as train/test/val. These have been collected roughly 66% from the Loeb Classical Library and 34% from the Vulgate translation.
For those that were gathered from the Loeb Classical Library, alignment between source and target sequences was performed manually.
Each sample is annotated with the index and file (and therefore author/work) that the sample is from. If you find errors, please feel free to submit a PR to fix them.
 | 1,113 | [
[
-0.0186614990234375,
-0.0272064208984375,
0.017578125,
0.0257720947265625,
-0.03167724609375,
0.0014829635620117188,
-0.01450347900390625,
-0.028106689453125,
0.042327880859375,
0.0343017578125,
-0.036865234375,
-0.055206298828125,
-0.03302001953125,
0.03149... |
Zombely/wikisource-green | 2023-03-18T11:50:26.000Z | [
"region:us"
] | Zombely | null | null | 0 | 16 | 2023-03-15T02:03:19 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train_1
num_bytes: 15342818708.456
num_examples: 9816
- name: train_2
num_bytes: 13234327199.457
num_examples: 9997
- name: train_3
num_bytes: 8814747830.88
num_examples: 9935
- name: train_4
num_bytes: 10839226390.145
num_examples: 9995
- name: train_5
num_bytes: 12414635965.0
num_examples: 10000
- name: train_6
num_bytes: 5911580759.0
num_examples: 10000
- name: train_7
num_bytes: 11420080854.0
num_examples: 10000
- name: train_8
num_bytes: 18080629271.0
num_examples: 10000
- name: train_9
num_bytes: 11348011360.0
num_examples: 10000
- name: train_10
num_bytes: 14141957301.0
num_examples: 10000
- name: train_11
num_bytes: 9983910604.0
num_examples: 10000
- name: train_12
num_bytes: 13105253749.0
num_examples: 10000
- name: train_13
num_bytes: 15681320595.0
num_examples: 10000
- name: train_14
num_bytes: 14896725472.0
num_examples: 10000
- name: train_15
num_bytes: 11493364396.927
num_examples: 9987
- name: validation
num_bytes: 4487934740.612
num_examples: 4077
download_size: 5330245163
dataset_size: 191196525196.477
---
# Dataset Card for "wikisource-green"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,495 | [
[
-0.036651611328125,
-0.01064300537109375,
0.0160675048828125,
-0.0010318756103515625,
-0.01294708251953125,
0.004611968994140625,
0.0019388198852539062,
-0.0236358642578125,
0.054229736328125,
0.01409149169921875,
-0.07843017578125,
-0.053955078125,
-0.032348632... |
semeru/code-code-DefectDetection | 2023-03-27T21:16:02.000Z | [
"license:mit",
"region:us"
] | semeru | null | null | 0 | 16 | 2023-03-22T03:30:09 | ---
license: mit
Programminglanguage: "C"
version: "N/A"
Date: "Devign(Jun 2019 - paper release date)"
Contaminated: "Very Likely"
Size: "Standard Tokenizer"
---
### Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-code/Defect-detection in Semeru
# CodeXGLUE -- Defect Detection
## Task Definition
Given a piece of source code, the task is to identify whether it is insecure code that may expose software systems to attack, such as through resource leaks, use-after-free vulnerabilities, or DoS attacks. We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.
### Dataset
The dataset we use comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). We combine all projects and split 80%/10%/10% for training/dev/test.
### Data Format
Three pre-processed .jsonl files are provided: train.jsonl, valid.jsonl, and test.jsonl.
In each file, every line represents one function, with the following fields:
- **func:** the source code
- **target:** 0 or 1 (vulnerability or not)
- **idx:** the index of example
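Since each line is a standalone JSON object, a file can be parsed with the standard library; the values below are illustrative, not taken from the dataset:

```python
import json

# One line of train.jsonl (illustrative values only).
line = '{"func": "int main() { return 0; }", "target": 0, "idx": 42}'

example = json.loads(line)
print(example["target"])  # 0 -> secure code
print(example["idx"])     # 42
```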
### Data Statistics
Data statistics of the dataset are shown in the below table:
| | #Examples |
| ----- | :-------: |
| Train | 21,854 |
| Dev | 2,732 |
| Test | 2,732 |
## Reference
<pre><code>@inproceedings{zhou2019devign,
title={Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks},
author={Zhou, Yaqin and Liu, Shangqing and Siow, Jingkai and Du, Xiaoning and Liu, Yang},
booktitle={Advances in Neural Information Processing Systems},
pages={10197--10207},
year={2019}
}</code></pre>
| 2,035 | [
[
-0.017303466796875,
-0.04095458984375,
0.0030803680419921875,
0.0033893585205078125,
0.00862884521484375,
0.002933502197265625,
-0.005035400390625,
-0.0271148681640625,
0.0024013519287109375,
0.0433349609375,
-0.035614013671875,
-0.074462890625,
-0.0645751953125... |
mstz/speeddating | 2023-04-07T14:54:21.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<10K",
"language:en",
"speeddating",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | 0 | 16 | 2023-03-23T23:41:42 | ---
language:
- en
tags:
- speeddating
- tabular_classification
- binary_classification
pretty_name: Speed dating
size_categories:
- 1K<n<10K
task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
- tabular-classification
configs:
- dating
---
# Speed dating
The [Speed dating dataset](https://www.openml.org/search?type=data&sort=nr_of_likes&status=active&id=40536) from OpenML.
# Configurations and tasks
| **Configuration** | **Task** | Description |
|-------------------|---------------------------|---------------------------------------------------------------|
| dating | Binary classification | Will the two date? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/speeddating")["train"]
```
# Features
|**Features** |**Type** |
|---------------------------------------------------|---------|
|`is_dater_male` |`int8` |
|`dater_age` |`int8` |
|`dated_age` |`int8` |
|`age_difference` |`int8` |
|`dater_race` |`string` |
|`dated_race` |`string` |
|`are_same_race` |`int8` |
|`same_race_importance_for_dater` |`float64`|
|`same_religion_importance_for_dater` |`float64`|
|`attractiveness_importance_for_dated` |`float64`|
|`sincerity_importance_for_dated` |`float64`|
|`intelligence_importance_for_dated` |`float64`|
|`humor_importance_for_dated` |`float64`|
|`ambition_importance_for_dated` |`float64`|
|`shared_interests_importance_for_dated` |`float64`|
|`attractiveness_score_of_dater_from_dated` |`float64`|
|`sincerity_score_of_dater_from_dated` |`float64`|
|`intelligence_score_of_dater_from_dated` |`float64`|
|`humor_score_of_dater_from_dated` |`float64`|
|`ambition_score_of_dater_from_dated` |`float64`|
|`shared_interests_score_of_dater_from_dated` |`float64`|
|`attractiveness_importance_for_dater` |`float64`|
|`sincerity_importance_for_dater` |`float64`|
|`intelligence_importance_for_dater` |`float64`|
|`humor_importance_for_dater` |`float64`|
|`ambition_importance_for_dater` |`float64`|
|`shared_interests_importance_for_dater` |`float64`|
|`self_reported_attractiveness_of_dater` |`float64`|
|`self_reported_sincerity_of_dater` |`float64`|
|`self_reported_intelligence_of_dater` |`float64`|
|`self_reported_humor_of_dater` |`float64`|
|`self_reported_ambition_of_dater` |`float64`|
|`reported_attractiveness_of_dated_from_dater` |`float64`|
|`reported_sincerity_of_dated_from_dater` |`float64`|
|`reported_intelligence_of_dated_from_dater` |`float64`|
|`reported_humor_of_dated_from_dater` |`float64`|
|`reported_ambition_of_dated_from_dater` |`float64`|
|`reported_shared_interests_of_dated_from_dater` |`float64`|
|`dater_interest_in_sports` |`float64`|
|`dater_interest_in_tvsports` |`float64`|
|`dater_interest_in_exercise` |`float64`|
|`dater_interest_in_dining` |`float64`|
|`dater_interest_in_museums` |`float64`|
|`dater_interest_in_art` |`float64`|
|`dater_interest_in_hiking` |`float64`|
|`dater_interest_in_gaming` |`float64`|
|`dater_interest_in_clubbing` |`float64`|
|`dater_interest_in_reading` |`float64`|
|`dater_interest_in_tv` |`float64`|
|`dater_interest_in_theater` |`float64`|
|`dater_interest_in_movies` |`float64`|
|`dater_interest_in_concerts` |`float64`|
|`dater_interest_in_music` |`float64`|
|`dater_interest_in_shopping` |`float64`|
|`dater_interest_in_yoga` |`float64`|
|`interests_correlation` |`float64`|
|`expected_satisfaction_of_dater` |`float64`|
|`expected_number_of_likes_of_dater_from_20_people` |`int8` |
|`expected_number_of_dates_for_dater` |`int8` |
|`dater_liked_dated` |`float64`|
|`probability_dated_wants_to_date` |`float64`|
|`already_met_before` |`int8` |
|`dater_wants_to_date` |`int8` |
|`dated_wants_to_date` |`int8` |
| 5,157 | [
[
-0.046173095703125,
-0.044158935546875,
0.0240631103515625,
0.0216064453125,
-0.0244140625,
-0.006221771240234375,
0.00759124755859375,
-0.03289794921875,
0.0399169921875,
0.020416259765625,
-0.05499267578125,
-0.03546142578125,
-0.04876708984375,
0.00989532... |
pkyoyetera/luganda_english_dataset | 2023-03-25T19:54:14.000Z | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:lg",
"license:apache-2.0",
"region:us"
] | pkyoyetera | null | null | 0 | 16 | 2023-03-25T06:34:10 | ---
dataset_info:
features:
- name: English
dtype: string
- name: Luganda
dtype: string
splits:
- name: train
num_bytes: 11844863.620338032
num_examples: 78238
download_size: 7020236
dataset_size: 11844863.620338032
license: apache-2.0
task_categories:
- translation
language:
- en
- lg
size_categories:
- 10K<n<100K
---
# Dataset Card for "luganda_english_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
The dataset might contain a few mistakes, especially in the one-word translations. Indicators for verbs and nouns (v.i. and n.i.) may not have been filtered out completely.
[
-0.01047515869140625,
-0.04986572265625,
0.019805908203125,
0.0136260986328125,
-0.023468017578125,
-0.005413055419921875,
-0.009613037109375,
-0.024749755859375,
0.06610107421875,
0.0294036865234375,
-0.06201171875,
-0.05413818359375,
-0.0614013671875,
0.00... |
ipipan/maupqa | 2023-09-18T07:28:41.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"task_ids:document-retrieval",
"annotations_creators:found",
"annotations_creators:machine-generated",
"size_categories:1M<n<10M",
"language:pl",
"license:cc-by-sa-4.0",
"arxiv:2305.05486",
"arxiv:... | ipipan | MAUPQA is a collection of datasets for Polish Open-domain Question Answering. | @inproceedings{rybak-2023-maupqa,
title = "{MAUPQA}: Massive Automatically-created {P}olish Question Answering Dataset",
author = "Rybak, Piotr",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.2",
pages = "11--16",
abstract = "Recently, open-domain question answering systems have begun to rely heavily on annotated datasets to train neural passage retrievers. However, manually annotating such datasets is both difficult and time-consuming, which limits their availability for less popular languages. In this work, we experiment with several methods for automatically collecting weakly labeled datasets and show how they affect the performance of the neural passage retrieval models. As a result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000 question-passage pairs for Polish, as well as the HerBERT-QA neural retriever.",
} | 2 | 16 | 2023-03-31T10:21:18 | ---
task_categories:
- question-answering
- text-retrieval
task_ids:
- open-domain-qa
- document-retrieval
language:
- pl
pretty_name: MAUPQA
size_categories:
- 1M<n<10M
annotations_creators:
- found
- machine-generated
license: cc-by-sa-4.0
---
# Dataset Card for MAUPQA Dataset
## Dataset Description
- **Paper:** [MAUPQA: Massive Automatically-created Polish Question Answering Dataset](https://arxiv.org/abs/2305.05486), [SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering](https://arxiv.org/abs/2309.08469)
- **Point of Contact:** [Piotr Rybak](mailto:piotr.cezary.rybak@gmail.com)
### Dataset Summary
MAUPQA is a collection of 14 datasets for Polish document retrieval. Most of the datasets are either machine-generated or machine-translated from English. Across all datasets, it consists of over 1M questions, 1M positive, and 7M hard-negative question-passage pairs.
### Supported Tasks and Leaderboards
- `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by [top-k retrieval accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) or [NDCG](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ndcg_score.html).
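As a rough illustration (with made-up retrieval results, not actual MAUPQA data), top-k retrieval accuracy is simply the fraction of questions whose relevant passage appears among the top k retrieved passages:

```python
def top_k_accuracy(relevant_ids, ranked_ids, k):
    """Fraction of queries whose relevant passage id is among the top-k results."""
    hits = sum(
        1 for rel, ranked in zip(relevant_ids, ranked_ids)
        if rel in ranked[:k]
    )
    return hits / len(relevant_ids)

# Toy example: 3 questions, each with one relevant passage id.
relevant = [101, 202, 303]
retrieved = [
    [101, 555, 666],  # hit at rank 1
    [444, 202, 777],  # hit at rank 2
    [888, 999, 111],  # miss
]
print(top_k_accuracy(relevant, retrieved, k=1))  # 1/3
print(top_k_accuracy(relevant, retrieved, k=2))  # 2/3
```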
### Languages
The text is in Polish, as spoken by the [Internet users](https://github.com/facebookresearch/cc_net), [Polish Wikipedia](https://pl.wikipedia.org/) editors, or is an output of generative or translation models. The BCP-47 code for Polish is pl-PL.
## Dataset Structure
### Data Instances
The dataset consists of over 8 million question-passage pairs. For each instance, there is a `question`, a passage (`passage_title`, `passage_text`), and a boolean indicator of whether the passage is `relevant` to the given question (i.e. whether it contains the answers).
For a small subset of questions, there is also a list of possible `answers` formulated in natural language, in the way a Polish speaker would answer the questions.
```
{
'question_id': 1,
'question': 'Na którym kontynencie leży państwo Gujana, panie Krzysztofie?',
'answers': "['W Ameryce Południowej']",
'passage_title': 'Gujana (ujednoznacznienie)',
'passage_text': 'Gujana (region) – region Ameryki Południowej Gujana – państwo w Ameryce Południowej Gujana Brytyjska – dawna kolonia brytyjska; obecnie państwo Gujana Gujana Francuska – departament zamorski Francji; dawniej kolonia francuska Gujana Holenderska – dawna kolonia holenderska; obecnie państwo Surinam',
'relevant': True,
'passage_source': 'crawling',
'subset': '1z10'
}
```
### Data Fields
Question-passage pairs:
- `question_id`: an integer id of the question
- `question`: a string containing the question
- `passage_title`: a string containing the title of the Wikipedia article
- `passage_text`: a string containing the passage text as extracted by the human annotator
- `relevant`: a boolean flag representing whether a passage is relevant to the question (i.e. does it contain the answers)
- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
- `answers`: a string containing a list of possible short answers to the question
- `passage_source`: a string containing the method of obtaining the passage. One of the following:
- `manual-annotation`: the question-passage pair was manually annotated
- `crawling`: the question-passage pairs were created by taking advantage of the specific structure of crawled website
- `dataset-translation`: the dataset was created by machine-translating the English dataset
- `generative-model`: the question was created by the generative model based on the given passage
- `bm25-negatives`: the passage was found by the BM25 retriever and scored using a multilingual cross-encoder to ensure it is not relevant
- `bm25-positives`: the passage was found by the BM25 retriever and scored using a multilingual cross-encoder to ensure it is relevant
- `subset`: a string containing the name of the dataset
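Note that the `answers` field is a *string* containing a Python-style list (as in the instance above), not a list itself, so it needs parsing before use. A minimal sketch using the standard library's `ast.literal_eval` (the helper name is ours, not part of the dataset):

```python
import ast

def parse_answers(raw):
    """Parse the stringified `answers` field into a list (missing answers become [])."""
    if raw is None:
        return []
    return ast.literal_eval(raw)

row = {
    "question": "Na którym kontynencie leży państwo Gujana, panie Krzysztofie?",
    "answers": "['W Ameryce Południowej']",
    "relevant": True,
}
print(parse_answers(row["answers"]))  # ['W Ameryce Południowej']
```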
### Data Splits
MAUPQA is a collection of 14 datasets and most of them are weakly labeled. Therefore, the intended use of MAUPQA is for training only. As such, all examples belong to a single `train` split. We recommend using the [PolQA](https://huggingface.co/datasets/ipipan/polqa) dataset for evaluation.
Basic statistics of all 14 datasets:
| dataset | # questions | # answers | # positive passages | # negative passages |
|-------------------|------------:|----------:|--------------------:|--------------------:|
| 1z10 | 22,835 | 21,415 | 22,014 | 139,471 |
| czy-wiesz-v2 | 29,078 | - | 29,078 | 143,306 |
| gpt3-cc | 10,146 | 10,146 | 10,177 | 89,203 |
| gpt3.5-cc | 29,591 | 29,583 | 29,720 | 251,959 |
| gpt3.5-wiki | 29,674 | 29,636 | 29,748 | 115,564 |
| mkqa | 4,036 | 4,036 | 3,968 | 19,814 |
| mqa | 172,768 | - | 178,131 | 1,249,659 |
| msmarco | 389,987 | - | 416,763 | 3,006,996 |
| multilingual-NLI | 100,752 | 64,900 | 68,096 | 743,857 |
| nq | 135,781 | - | 139,976 | 797,436 |
| poleval2021-pairs | 1,977 | - | 2,088 | 17,608 |
| poquad | 56,588 | 46,157 | 46,187 | 299,865 |
| templates | 15,993 | 14,504 | 15,993 | 45,228 |
| wiki-def | 18,093 | 18,092 | 18,093 | 84,956 |
| Total | 1,017,299 | 238,469 | 1,010,032 | 7,004,922 |
## Dataset Creation
### Curation Rationale
Open-domain question answering systems rely heavily on annotated datasets to train neural document retrievers. However, manually annotating such datasets is both difficult and time-consuming. To overcome these difficulties, we experimented with several methods for automatically collecting weakly labeled datasets. As a result, MAUPQA enables the development of robust document retrieval systems for Polish.
### Source Data
#### Initial Data Collection and Normalization
Below, we briefly describe each dataset. For a detailed description please refer to the [paper](https://arxiv.org/abs/2305.05486).
* `1z10`: We transcribe 333 recordings of the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show using the Whisper model and extract the question-answer pairs using the GPT-3.5 model. We use the BM25 retriever and the GPT-3.5-based cross-encoder to match questions with Wikipedia passages.
* `czy-wiesz-v2`: We first crawl all questions from the [Did you know?](https://pl.wikipedia.org/wiki/Wikiprojekt:Czy_wiesz/archiwum) section on Polish Wikipedia together with a link to the relevant Wikipedia article. Then, we use the [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to choose the most relevant passage.
* `gpt3-cc`: We sample random passages from [CCNet](https://github.com/facebookresearch/cc_net) corpus and prompt GPT-3 to generate a relevant question.
* `gpt3.5-cc`: We sample random passages from [CCNet](https://github.com/facebookresearch/cc_net) corpus and prompt GPT-3.5 to generate a relevant question.
* `gpt3.5-wiki`: We sample random passages from Polish Wikipedia and prompt GPT-3.5 to generate a relevant question.
* `mkqa`: We clean the Polish subset of the [MKQA](https://huggingface.co/datasets/mkqa) dataset by removing questions without answers, requiring long answers (*Why?* and *How?* questions), and ambiguous ones ("Who is the *current* president?"). We use the BM25 retriever and the [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to choose the most relevant passage.
* `mqa`: We clean the Polish subset of the [MQA](https://huggingface.co/datasets/clips/mqa) dataset by removing artificially created questions like "What is the best hotel in *{city}*?" for hundreds of different *cities*. To clean the dataset, we cluster lexically similar questions/passages and remove clusters with over 5 questions.
* `msmarco`: We translate the [MS MARCO](https://huggingface.co/datasets/ms_marco) dataset into Polish using the machine translation model.
* `multilingual-NLI`: We extract question-answer pairs from the Polish subset of the [multilingual-NLI](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) dataset. We create questions using the following template: "Czy *{premise}*?" (Eng. "Does *{premise}*?") and use hypotheses as passages. We consider `entailment` and `contradiction` labels as relevant and `neutral` as negative.
* `nq`: We translate the [NQ](https://huggingface.co/datasets/natural_questions) dataset into Polish using the machine translation model.
* `poleval2021-pairs`: We take [allegro/polish-question-passage-pairs](https://huggingface.co/datasets/allegro/polish-question-passage-pairs) without any changes.
* `poquad`: We extract question-passages pairs from the training split of the [PoQuAD](https://huggingface.co/datasets/clarin-pl/poquad) dataset.
* `templates`: We take advantage of the Wikipedia structure to generate questions using predefined templates. For example, list pages group together similar entities (e.g. "Writers born in Poland") which allow generating questions like "Where was *{writer name}* born?". In total, we use 33 templates to generate questions. We use the [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to choose the most relevant passage from the linked article.
* `wiki-def`: We use [Wiktionary](https://www.wiktionary.org/) to generate questions based on word definitions. We use definitions that have links to Wikipedia articles to create the question-passage pairs. For example, the definition of "Monday" is "the first day of the week". Based on it, we generate the question "What is the name of *the first day of the week*?".
Additionally, we extend each dataset by sampling hard negative passages using a BM25 retriever and scoring them with a [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to ensure that the passages are not relevant.
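The hard-negative mining step described above amounts to keeping BM25 candidates that a cross-encoder scores as non-relevant. A minimal sketch of that filtering logic (the scores and threshold below are made up; the actual retriever and cross-encoder are the ones named in the text):

```python
def select_hard_negatives(candidates, threshold=0.5):
    """Keep BM25 candidates whose cross-encoder relevance score is below threshold.

    `candidates` is a list of (passage_id, cross_encoder_score) pairs,
    already retrieved by BM25 — i.e. lexically similar to the question.
    """
    return [pid for pid, score in candidates if score < threshold]

# Hypothetical cross-encoder scores for BM25-retrieved passages.
bm25_candidates = [("p1", 0.92), ("p2", 0.11), ("p3", 0.38), ("p4", 0.71)]
print(select_hard_negatives(bm25_candidates))  # ['p2', 'p3']
```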
#### Who are the source language producers?
The text is in Polish, as spoken by the [Internet users](https://github.com/facebookresearch/cc_net), [Polish Wikipedia](https://pl.wikipedia.org/) editors, or is an output of generative or translation models.
### Annotations
#### Annotation process
The MAUPQA dataset does not provide any additional annotation beyond what is present in the source datasets.
#### Who are the annotators?
Please refer to the description of the source datasets.
### Personal and Sensitive Information
The dataset should not contain any personal or sensitive information. However, we use the [CCNet](https://github.com/facebookresearch/cc_net) dataset as a source of passages that we didn't manually inspect for personal and sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was created to promote research in open-domain question answering for Polish and to enable the development of question answering systems.
### Discussion of Biases
The machine-translated datasets might not represent the natural language as used by native Polish speakers. Similarly, the questions generated by the generative models might not be representative or correct.
Most of the question-passage pairs are created automatically using the BM25 retriever, and as such the dataset is biased toward lexically similar pairs.
### Other Known Limitations
The MAUPQA dataset is mostly automatically generated and can therefore contain a high proportion of noise and incorrectly labeled question-passage pairs.
## Additional Information
### Dataset Curators
The MAUPQA dataset was collected by Piotr Rybak and Maciej Ogrodniczuk from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/) but the source datasets were created by many more researchers. Please refer to the original dataset descriptions for the full authorship.
This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@inproceedings{rybak-2023-maupqa,
title = "{MAUPQA}: Massive Automatically-created {P}olish Question Answering Dataset",
author = "Rybak, Piotr",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.2",
pages = "11--16",
abstract = "Recently, open-domain question answering systems have begun to rely heavily on annotated datasets to train neural passage retrievers. However, manually annotating such datasets is both difficult and time-consuming, which limits their availability for less popular languages. In this work, we experiment with several methods for automatically collecting weakly labeled datasets and show how they affect the performance of the neural passage retrieval models. As a result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000 question-passage pairs for Polish, as well as the HerBERT-QA neural retriever.",
}
```
```
@misc{rybak2023silverretriever,
title={SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering},
author={Piotr Rybak and Maciej Ogrodniczuk},
year={2023},
eprint={2309.08469},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 14,189 | [
[
-0.046142578125,
-0.066162109375,
0.03338623046875,
-0.0037212371826171875,
-0.0228424072265625,
-0.0192718505859375,
-0.0208740234375,
-0.00922393798828125,
0.021697998046875,
0.032562255859375,
-0.054962158203125,
-0.051025390625,
-0.034423828125,
0.031860... |
IndianaUniversityDatasetsModels/MIMIC-medical-report | 2023-04-06T02:47:09.000Z | [
"region:us"
] | IndianaUniversityDatasetsModels | null | null | 2 | 16 | 2023-04-06T02:46:47 | ---
dataset_info:
features:
- name: FileName
dtype: string
- name: INDICATION
dtype: string
- name: IMPRESSION
dtype: string
- name: FINDINGS
dtype: string
splits:
- name: train
num_bytes: 45203432.183416
num_examples: 83971
- name: test
num_bytes: 461341.9082919998
num_examples: 857
- name: validation
num_bytes: 461341.9082919998
num_examples: 857
download_size: 20175619
dataset_size: 46126116.00000001
---
# Dataset Card for "MIMIC-medical-report"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 647 | [
[
-0.0232391357421875,
-0.0175628662109375,
0.0206298828125,
0.0153961181640625,
-0.00533294677734375,
0.003360748291015625,
0.026092529296875,
-0.03436279296875,
0.0789794921875,
0.031158447265625,
-0.061248779296875,
-0.04779052734375,
-0.04364013671875,
-0.... |
mstz/heart | 2023-04-16T17:31:05.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"heart",
"tabular_classification",
"binary_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_heart_disease_45,
author = {Janosi, Andras and Steinbrunn, William and Pfisterer, Matthias and Detrano, Robert},
title = {{Heart Disease}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \url{10.24432/C52P4X}}
} | 0 | 16 | 2023-04-06T10:18:50 | ---
language:
- en
tags:
- heart
- tabular_classification
- binary_classification
- UCI
pretty_name: Heart
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- cleveland
- va
- switzerland
- hungary
license: cc
---
# Heart
The [Heart dataset](https://archive.ics.uci.edu/ml/datasets/Heart) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Does the patient have heart disease?
# Configurations and tasks
| **Configuration** | **Task** |
|-------------------|---------------------------|
| cleveland | Binary classification |
| hungary | Binary classification |
| switzerland | Binary classification |
| va | Binary classification |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/heart", "hungary")["train"]
``` | 715 | [
[
-0.00933074951171875,
-0.0211944580078125,
0.0167388916015625,
0.01393890380859375,
-0.02752685546875,
-0.0207061767578125,
-0.00453948974609375,
-0.00858306884765625,
0.0158843994140625,
0.04180908203125,
-0.036407470703125,
-0.06866455078125,
-0.06182861328125... |
j0selit0/insurance-qa-en | 2023-04-07T09:33:50.000Z | [
"region:us"
] | j0selit0 | null | null | 3 | 16 | 2023-04-06T13:38:01 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: topic_en
dtype: string
- name: question_en
dtype: string
splits:
- name: train
num_bytes: 1044899
num_examples: 12888
- name: test
num_bytes: 162551
num_examples: 1999
- name: valid
num_bytes: 162498
num_examples: 1999
download_size: 126622
dataset_size: 1369948
---
# Dataset Card for "insurance-qa-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 555 | [
[
-0.0252685546875,
-0.0062713623046875,
0.0157470703125,
0.0203399658203125,
-0.01186370849609375,
0.00030922889709472656,
0.04248046875,
-0.0227203369140625,
0.05706787109375,
0.033843994140625,
-0.05108642578125,
-0.05767822265625,
-0.02288818359375,
-0.016... |
CM/codexglue_code2text_go | 2023-04-22T01:51:07.000Z | [
"region:us"
] | CM | null | null | 0 | 16 | 2023-04-22T01:50:51 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 342243143
num_examples: 167288
- name: validation
num_bytes: 13721860
num_examples: 7325
- name: test
num_bytes: 16328406
num_examples: 8122
download_size: 121340474
dataset_size: 372293409
---
# Dataset Card for "codexglue_code2text_go"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 910 | [
[
-0.0202789306640625,
-0.0122528076171875,
0.0164947509765625,
0.023834228515625,
-0.01020050048828125,
0.0009212493896484375,
-0.006481170654296875,
-0.0181732177734375,
0.042236328125,
0.049835205078125,
-0.053985595703125,
-0.06298828125,
-0.0386962890625,
... |
Deojoandco/covid-qa-squad | 2023-04-30T03:49:20.000Z | [
"region:us"
] | Deojoandco | null | null | 0 | 16 | 2023-04-30T03:48:58 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 48659177
num_examples: 1417
- name: validation
num_bytes: 4315410
num_examples: 203
- name: test
num_bytes: 11609921
num_examples: 375
download_size: 2242745
dataset_size: 64584508
---
# Dataset Card for "covid-qa-squad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 707 | [
[
-0.0343017578125,
-0.01218414306640625,
0.003879547119140625,
0.020233154296875,
-0.01226806640625,
0.0215606689453125,
0.037445068359375,
-0.0098724365234375,
0.061248779296875,
0.007541656494140625,
-0.07415771484375,
-0.047119140625,
-0.020782470703125,
-... |
sanchit-gandhi/librispeech-data | 2023-05-05T16:55:27.000Z | [
"region:us"
] | sanchit-gandhi | null | null | 0 | 16 | 2023-05-05T16:06:41 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6623027227.062
num_examples: 28539
- name: train.clean.360
num_bytes: 23910449107.828
num_examples: 104014
- name: train.other.500
num_bytes: 31827722515.584
num_examples: 148688
- name: validation.clean
num_bytes: 359889672.966
num_examples: 2703
- name: validation.other
num_bytes: 337620033.648
num_examples: 2864
- name: test.clean
num_bytes: 368013946.42
num_examples: 2620
- name: test.other
num_bytes: 352742113.154
num_examples: 2939
download_size: 61829574809
dataset_size: 63779464616.662
---
# Dataset Card for "librispeech-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,064 | [
[
-0.04632568359375,
-0.0129852294921875,
0.0153961181640625,
0.01180267333984375,
-0.01483154296875,
-0.01250457763671875,
0.019195556640625,
-0.0196990966796875,
0.0721435546875,
0.0305938720703125,
-0.06353759765625,
-0.05267333984375,
-0.03326416015625,
-0... |
techiaith/banc-trawsgrifiadau-bangor | 2023-10-26T09:42:39.000Z | [
"size_categories:10K<n<100K",
"language:cy",
"license:cc0-1.0",
"verbatim transcriptions",
"speech recognition",
"region:us"
] | techiaith | Dyma fanc o 30 awr 20 munud a 41 eiliad o segmentau o leferydd naturiol dros hanner cant o gyfranwyr ar ffurf ffeiliau mp3, ynghyd â thrawsgrifiadau 'verbatim' cyfatebol o’r lleferydd ar ffurf ffeil .tsv. Mae'r mwyafrif o'r lleferydd yn leferydd digymell, naturiol. Dosbarthwn y deunydd hwn o dan drwydded agored CC0.
This resource is a bank of 30 hours 20 minutes and 41 seconds of segments of natural speech from over 50 contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv file format. The majority of the speech is spontaneous, natural speech. We distribute this material under a CC0 open license. | } | 1 | 16 | 2023-05-11T13:08:07 | ---
license: cc0-1.0
language:
- cy
tags:
- verbatim transcriptions
- speech recognition
pretty_name: 'Banc Trawsgrifiadau Bangor'
size_categories:
- 10K<n<100K
---
[See below for English](#bangor-transcription-bank)
# Banc Trawsgrifiadau Bangor
Dyma fanc o 30 awr 20 munud a 41 eiliad o segmentau o leferydd naturiol dros hanner cant o gyfranwyr ar ffurf ffeiliau mp3, ynghyd â thrawsgrifiadau 'verbatim' cyfatebol o’r lleferydd ar ffurf ffeil .tsv. Mae'r mwyafrif o'r lleferydd yn leferydd digymell, naturiol. Dosbarthwn y deunydd hwn o dan drwydded agored CC0.
## Pwrpas
Pwrpas y trawsgrifiadau hyn yw gweithredu fel data hyfforddi ar gyfer modelau adnabod lleferydd, gan gynnwys [ein modelau wav2vec](https://github.com/techiaith/docker-wav2vec2-cy). Ar gyfer y diben hwnnw, mae gofyn am drawsgrifiadau mwy verbatim o'r hyn a ddywedwyd na'r hyn a welir mewn trawsgrifiadau traddodiadol ac mewn isdeitlau, felly datblygwyd confensiwn arbennig ar gyfer y gwaith trawsgrifio ([gweler isod](#confensiynau_trawsgrifio)). Gydag ein modelau wav2vec, caiff cydran ychwanegol, sef 'model iaith', ei defnyddio ar ôl y model adnabod lleferydd i safoni mwy ar allbwn y model adnabod lleferydd i fod yn debycach i drawsgrifiadau traddodiadol ac isdeitlau.
Rydyn ni wedi darparu 3 ffeil .tsv, sef clips.tsv, train.tsv a test.tsv. Mae clips.tsv yn cynnwys ein trawsgrifiadau i gyd. Crëwyd train.tsv a test.tsv er mwyn darparu setiau 'safonol' sy'n caniatáu i ddefnyddwyr allu cymharu modelau gan wahanol hyfforddwyr yn deg, hynny yw fe'u crëwyd at bwrpas meincnodi. Mae train.tsv yn cynnwys 80% o'n trawsgrifiadau, a test.tsv yn cynnwys y 20% sy'n weddill.
Dyma enghraifft o gynnwys y data:
```
audio_filename audio_filesize transcript duration
f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092
f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590
3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570
```
Ceir pedair colofn yn y ffeiliau .tsv. Y cyntaf yw enw’r ffeil sain. Maint y ffeil sain yw’r ail. Y trawsgrifiad ei hun sydd yn y drydedd golofn. Hyd y clip sain sydd yn yr olaf.
Dyma'r wybodaeth am y colofnau.
| Maes| Esboniad |
| ------ | ------ |
| `audio_filename`| Enw'r ffeil sain o fewn y ffolder 'clips'|
| `audio_filesize` | Maint y ffeil|
| `transcript` | Trawsgrifiad |
| `duration` | Hyd amser y clip mewn milliseconds. |
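A minimal sketch of reading these .tsv files with Python's standard `csv` module (the inline sample below reuses a row from the example above; the helper is illustrative, not part of the dataset):

```python
import csv
import io

# A small inline sample mirroring clips.tsv (tab-separated, with a header row).
sample = (
    "audio_filename\taudio_filesize\ttranscript\tduration\n"
    "f0c2310fdca34faaa83beca5fa7ed212.mp3\t809720\t"
    "sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra\t4590\n"
)

def read_clips(fh):
    """Yield one dict per row, converting the numeric columns to int."""
    for row in csv.DictReader(fh, delimiter="\t"):
        row["audio_filesize"] = int(row["audio_filesize"])
        row["duration"] = int(row["duration"])
        yield row

rows = list(read_clips(io.StringIO(sample)))
print(rows[0]["duration"])  # 4590
```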
## Y Broses o Greu’r Adnodd
Casglwyd y ffeiliau sain yn bennaf o bodlediadau Cymraeg gyda chaniatâd eu perchnogion yn ogystal â'r cyfranwyr unigol. Rydym yn ddiolchgar tu hwnt i’r bobl yna. Yn ogystal, crewyd rhywfaint o sgriptiau ar batrwm eitemau newyddion ac erthyglau a'u darllen gan ymchwilwyr yr Uned Technolegau Iaith er mwyn sicrhau bod cynnwys o'r math hwnnw yn y banc.
Gyrrwyd y ffeiliau sain trwy ein trawsgrifiwr awtomataidd mewnol i segmentu’r sain a chreu trawsgrifiadau amrwd. Defnyddiwyd pecyn trawsgrifio Elan 6.4 (ar gael o https://archive.mpi.nl/tla/elan) gan drawsgrifwyr profiadol i wrando ar a chywiro’r trawsgrifiad amrwd.
## Nodyn Ynghylch Anonymeiddio’r Cynnwys
Er tegwch i’r cyfranwyr, rydyn ni wedi anonymeiddio’r trawsgrifiadau. Penderfynwyd anonymeiddio nid yn unig enwau pobl unigol, ond hefyd unrhyw Wybodaeth Bersonol Adnabyddadwy (PII) gan gynnwys, ond nid yn gyfunedig i:
* Rhif ffôn
* Teitlau swyddi/galwedigaethau
* Gweithleoedd
* Enwau mannau cyhoeddus
* Lleoliad daearyddol
* Dyddiadau/amseroedd
Wrth drawsgrifio marciwyd pob segment oedd yn cynnwys PII gyda’r tag \<PII>, yna wnaethom hidlo allan pob segment oedd yn cynnwys tag \<PII> er mwyn sicrhau nad oedd unrhyw wybodaeth bersonol yn cael eu cyhoeddi fel rhan o’r adnodd hwn.
Rydym hefyd wedi newid trefn trawsgrifiadau i fod ar hap, felly nid ydynt wedi'u cyhoeddi yn y drefn y maent yn eu ymddangos yn y ffeiliau sain gwreiddiol.
<a name="confensiynau_trawsgrifio"></a>
## Confensiynau Trawsgrifio
Datblygwyd y confensiynau trawsgrifio hyn er mwyn sicrhau fod y trawsgrifiadau nid yn unig yn verbatim ond hefyd yn gyson. Fe’u datblygwyd trwy gyfeirio at gonfensiynau a ddefnyddir gan yr Uned yn y gorffennol, confensiynau eraill megis y rhai a defnyddiwyd yng nghorpora CorCenCC, Siarad, CIG1 a CIG2, a hefyd trwy broses o ddatblygu parhaol wrth i’r tîm ymgymryd â’r dasg o drawsgrifio.
**NODWCH** - gan ein bod wedi datblygu’r egwyddorion trawsgrifio yn rhannol wrth ymgymryd â’r dasg o drawsgrifio nid yw’r trawsgrifiadau cynnar o reidrwydd yn dilyn yr egwyddorion cant y cant. Bwriadwn wirio’r trawsgrifiadau wedi i ni fireinio’r confensiynau.
### Collnodau
Ni ddefnyddiwyd collnodau i marcio pob un llythyren a hepgorwyd gan siaradwyr. Er enghraifft, _gwitho_ (sef ynganiad o _gweithio_) sy’n gywir, nid _gw’ith’o_
Yn hytrach, defnyddiwyd collnodau i wahaniaethu rhwng gwahanol eiriau oedd yn cael eu sillafu'r union yr un fath fel arall. Er enghraifft rydym yn defnyddio collnod o flaen _’ma_ (sef _yma_) i wahaniaethu rhyngddo â _ma’_ (sef _mae_), _gor’o’_ i wahaniaethu rhwng _gorfod_ a ffurf trydydd person unigol amser dibynnol presennol _gori_, a _pwysa’_ i wahaniaethu rhwng ffurf luosog _pwys_ a nifer o ffurfiau berfol posib _pwyso_.
Fodd bynnag, ceir eithriad i’r rheol hon, a hynny pan fo sillafu gair heb gollnod yn newid sŵn y llythyren cyn neu ar ôl y collnod, ac felly _Cymra’g_ sy’n gywir, nid _Cymrag_.
### Tagiau
Wrth drawsgrifio, defnyddiwyd y tagiau hyn i recordio elfennau oedd y tu hwnt i leferydd yr unigolion:
* \<anadlu>
* \<aneglur>
* \<cerddoriaeth>
* \<chwerthin>
* \<chwythu allan>
* \<clirio gwddf>
* \<distawrwydd>
* \<ochneidio>
* \<PII>
* \<peswch>
* \<sniffian>
* \<twtian>
Rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o elfennau sydd y tu hwnt i leferydd unigolion.
### Synau nad ydynt yn eiriol
Ymdrechwyd i drawsgrifio synau nad ydynt yn eiriol yn gyson. Er enghraifft, defnyddiwyd _yy_ bob tro (yn hytrach nag _yrr_, _yr_ neu _err_ neu gymysgedd o’r rheiny) i gynrychioli neu adlewyrchu’r sŵn a wnaethpwyd pan oedd siaradwr yn ceisio meddwl neu oedi wrth siarad.
Defnyddiwyd y canlynol wrth drawsgrifio:
* yy
* yym
* hmm
* m-hm
Eto, rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o synau nad ydynt yn eiriol.
### Geiriau Saesneg
Rydym wedi amgylchynu bob gair neu ymadrodd Saesneg gyda sêr, er enghraifft:
> Dwi’n deall **\*sort of\***.
### Cymreigio berfenwau
Pan fo siaradwyr yn defnyddio geiriau Saesneg fel berfenwau (trwy ychwanegu _io_ ar ddiwedd y gair er enghraifft) rydym wedi ymdrechu i sillafu’r gair gan ddefnyddio confensiynau sillafu Cymreig yn hytrach nag ychwanegu _io_ at sillafiad Saesneg o’r gair. Er enghraifft rydym wedi trawsgrifio _heitio_ yn hytrach na _hateio_, a _lyfio_ yn hytrach na _loveio_.
### Cywiro cam-siarad
I sicrhau ein bod ni’n glynu at egwyddorion trawsgrifio verbatim penderfynwyd na ddylem gywiro cam-siarad neu gam-ynganu siaradwyr. Er enghraifft, yn y frawddeg ganlynol:
> enfawr fel y diffyg o fwyd yym **efallu** cam-drin
mae'n amlwg mai’r gair _efallai_ sydd dan sylw mewn gwirionedd, ond fe’i trawsgrifiwyd fel ei glywir.
### Atalnodi
Defnyddiwyd atalnodau llawn, marciau cwestiwn ac ebychnodau wrth drawsgrifio’r lleferydd.
Rydym wedi amgylchynu bob gair neu ymadrodd sydd wedi ei dyfynnu gyda _”_, er enghraifft:
> Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim.
### Nodyn ynghylch ein defnydd o gomas
Gan mai confensiwn ysgrifenedig yw coma yn y bôn, ni ddefnyddiwyd comas cymaint wrth drawsgrifio. Byddai defnyddio coma lle y disgwylir i’w weld mewn testun ysgrifenedig ddim o reidrwydd wedi adlewyrchu lleferydd yr unigolyn. Dylid cadw hynny mewn cof wrth ddarllen y trawsgrifiadau.
### Sillafu llythrennau
Sillafwyd llythrennau unigol yn hytrach na thrawsgrifio’r llythrennau unigol yn unig.
Hynny yw, hyn sy’n gywir:
> Roedd ganddo **ow si di**
**ac nid:**
> Roedd ganddo **O C D**
**na chwaith:**
> Roedd ganddo **OCD**
### Rhifau
Trawsgrifiwyd rhifau fel geiriau yn hytrach na digidau, hynny yw hyn sy’n gywir:
> Y flwyddyn dwy fil ac ugain
**ac nid:**
> Y flwyddyn 2020
### Gorffen gair ar ei hanner
Marciwyd gair oedd wedi ei orffen ar ei hanner gyda `-`. Er enghraifft:
> Ma’n rhaid i mi **ca-** cael diod.
### Gorffen brawddeg ar ei hanner/ailddechrau brawddeg
Marciwyd brawddeg oedd wedi ei gorffen ar ei hanner gyda `...`. Er enghraifft:
> Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod.
### Siaradwr yn torri ar draws siaradwr arall
Ceir yn y data llawer o enghreifftiau o siaradwr yn torri ar draws y prif leferydd gan ddefnyddio synau nad ydynt yn eiriol, geiriau neu ymadroddion (megis _m-hm_, _ie_, _ydi_, _yn union_ ac ati). Pan oedd y ddau siaradwr i'w clywed yn glir ag ar wahân, rhoddwyd `...` ar ddiwedd rhan gyntaf y lleferydd toredig, a `...` arall ar ddechrau ail ran y lleferydd toredig, fel yn yr enghraifft ganlynol:
> Ond y peth yw... M-hm. ...mae’r ddau yn wir
Pan nad oedd y ddau siaradwr i'w clywed yn glir ac ar wahân, fe hepgorwyd y lleferydd o’r data.
### Rhegfeydd
Dylid nodi ein bod ni heb hepgor rhegfeydd wrth drawsgrifio.
## Y Dyfodol
Wrth ddefnyddio’r banc trawsgrifiadau dylid cadw mewn cof mai fersiwn cychwynnol ydyw. Bwriadwn fireinio a chysoni ein trawsgrifiadau ymhellach, ac ychwanegu mwy fyth o drawsgrifiadau i’r banc yn rheolaidd dros y flwyddyn nesaf.
## Cyfyngiadau
Er mwyn parchu'r cyfrannwyr, wrth lwytho'r data hwn i lawr rydych yn cytuno i beidio â cheisio adnabod y siaradwyr yn y data.
## Diolchiadau
Diolchwn i'r cyfrannwyr am eu caniatâd i ddefnyddio'u lleferydd. Rydym hefyd yn ddiolchgar i Lywodraeth Cymru am ariannu’r gwaith hwn fel rhan o broject Technoleg Testun, Lleferydd a Chyfieithu ar gyfer yr Iaith Gymraeg.
---
# Bangor Transcription Bank
This resource is a bank of 30 hours 20 minutes and 41 seconds of segments of natural speech from over 50 contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv file format. The majority of the speech is spontaneous, natural speech. We distribute this material under a CC0 open license.
## Purpose
The purpose of these transcripts is to act as training data for speech recognition models, including [our wav2vec models](https://github.com/techiaith/docker-wav2vec2-cy). For that purpose, the transcriptions are more verbatim than traditional transcriptions and than what subtitling requires, so a bespoke set of conventions has been developed for the transcription work ([see below](#transcription_conventions)). Our wav2vec models use an auxiliary component, namely a 'language model', to further standardize the speech recognition model’s output so that it is closer to traditional transcriptions and subtitles.
We have provided 3 .tsv files, namely clips.tsv, train.tsv and test.tsv. clips.tsv contains all of our transcripts. train.tsv and test.tsv were created to provide 'standard' sets that allow users to compare models trained by different trainers fairly, i.e. they were created as a 'benchmark'. train.tsv contains 80% of our transcripts, and test.tsv contains the remaining 20%.
Here is an example of the data content:
```
audio_filename audio_filesize transcript duration
f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092
f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590
3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570
```
There are four columns in the .tsv files. The first is the name of the audio file. The second is the size of the audio file. The transcript itself appears in the third column. The length of the audio clip appears in the last.
Here is the information about the columns.
| Field| Explanation |
| ------ | ------ |
| `audio_filename`| The name of the audio file within the 'clips' folder|
| `audio_filesize` | The size of the file |
| `transcript` | Transcript |
| `duration` | Duration of the clip in milliseconds. |
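The four-column layout above can be read with any standard TSV reader. The following minimal Python sketch uses an inline sample row rather than the distributed files, purely to illustrate the columns:

```python
import csv
import io

# Inline sample in the same four-column layout as clips.tsv:
# audio_filename, audio_filesize, transcript, duration (milliseconds).
sample_tsv = (
    "audio_filename\taudio_filesize\ttranscript\tduration\n"
    "f86a046fd0964e0386d8c1363907183d.mp3\t898272\t"
    "*post industrial* yym a gyda yy dwi'n ca'l deud\t5092\n"
)

rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))
for row in rows:
    # Durations are stored in milliseconds; convert for readability.
    seconds = int(row["duration"]) / 1000
    print(f"{row['audio_filename']}: {seconds:.3f} s")
```

Against the real data, the inline string would be replaced by `open("clips.tsv", newline="", encoding="utf-8")`.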
## The Process of Creating the Resource
The audio files were mainly collected from Welsh podcasts, after having gained the consent of the podcast owners and individual contributors to do so. We are extremely grateful to those people. In addition, some scripts were created which mimicked the pattern of news items and articles. These scripts were then read by Language Technologies Unit researchers in order to ensure that content of that type was included in the bank.
The audio files were run through our in-house automated transcriber to segment the audio and create raw transcripts. Using Elan 6.4 (available from https://archive.mpi.nl/tla/elan), experienced transcribers listened to and corrected the raw transcript.
## A Note About Content Anonymization
Out of respect to the contributors, we have anonymized all transcripts. It was decided to anonymize not only the names of individual people, but also any other Personally Identifiable Information (PII) including, but not limited to:
* Phone number
* Job titles/occupations
* Workplaces
* Names of public places
* Geographical location
* Dates/times
When transcribing, all segments containing PII were marked with the \<PII> tag; we then filtered out all segments containing a \<PII> tag to ensure that no personal information was published as part of this resource.
We have also randomized the order of the segments so that they are not published in the order they appeared in the original audio files.
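A simplified Python sketch of these two steps — dropping \<PII>-tagged segments, then randomising the order — might look like this (an illustration only, not the Unit's actual pipeline):

```python
import random

# Illustrative segments; any segment whose transcript contains a <PII>
# tag is dropped before publication.
segments = [
    {"transcript": "sut i ymdopio felly", "duration": 4590},
    {"transcript": "dwi'n gweithio yn <PII>", "duration": 3100},
    {"transcript": "ma'n rhaid i mi ca'l diod", "duration": 2800},
]

published = [s for s in segments if "<PII>" not in s["transcript"]]
# Randomise the order so segments do not appear in their original sequence.
random.shuffle(published)
print(len(published), "segments retained")
```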
<a name="transcription_conventions"></a>
## Transcription Conventions
These transcription conventions were developed to ensure that the transcriptions were not only verbatim but also consistent. They were developed by referring to conventions used by the Unit in the past, conventions such as those used in the CorCenCC, Siarad, CIG1 and CIG2 corpora, and also through a process of ongoing development as the team undertook the task of transcription.
**NOTE** - as we have partially developed the conventions at the same time as undertaking the task of transcription the early transcriptions may not follow the latest principles faithfully. We intend to check the transcripts after we have refined the conventions.
### Apostrophes
Apostrophes were not used to mark every single letter omitted by speakers. For example, _gwitho_ (which is a pronunciation of _gweithio_) is correct, not _gw’ith'o_.
Rather, apostrophes were used to distinguish between different words that were otherwise spelled identically. For example we use an apostrophe in front of _'ma_ (a pronunciation of _yma_) to distinguish it from _ma'_ (a pronunciation of _mae_), _gor'o'_ to distinguish between _gorfod_ and the third person singular form of the present dependent tense _gori_, and _pwysa'_ to distinguish between the plural form of _pwys_ and a number of possible verb forms of _pwyso_.
However, there is an exception to this rule, that being when spelling a word without an apostrophe would change the sound of the letter before or after the apostrophe, thus _Cymra'g_ is correct, not _Cymrag_.
### Tags
When transcribing, these tags were used to record elements that were external to the speech of the individuals:
* \<anadlu>
* \<aneglur>
* \<cerddoriaeth>
* \<chwerthin>
* \<chwythu allan>
* \<clirio gwddf>
* \<distawrwydd>
* \<ochneidio>
* \<PII>
* \<peswch>
* \<sniffian>
* \<twtian>
We anticipate that this list will grow as we transcribe more speech and as we come across more elements that are external to the speech of individuals.
### Non-verbal sounds
Efforts were made to transcribe non-verbal sounds consistently. For example, _yy_ was always used (rather than _yrr_, _yr_ or _err_, or a mixture of those) to represent or reflect the sound made when a speaker was trying to think or paused in speaking.
The following were used in transcription:
* yy
* yym
* hmm
* m-hm
Again, we anticipate that this list will grow as we transcribe more speech and as we encounter more non-verbal sounds.
### English words
We have surrounded each English word or phrase with asterisks, for example:
> Dwi’n deall **\*sort of\***.
### Adapting English words as Welsh language infinitives
When speakers use English words as infinitives (by adding _io_ at the end of the word for example) we have endeavoured to spell the word using Welsh spelling conventions rather than adding _io_ to the English spelling of the word. For example we have transcribed _heitio_ instead of _hateio_, and _lyfio_ instead of _loveio_.
### Correction of mis-pronunciations
To ensure that we adhere to the principles of verbatim transcription it was decided that we should not correct speakers' mis-pronunciations. For example, in the following sentence:
> enfawr fel y diffyg o fwyd yym **efallu** cam-drin
it is clear that _efallai_ is the intended word, but it is transcribed as it is heard.
### Punctuation
Full stops, question marks and exclamation marks were used when transcribing the speech.
We have surrounded all quoted words or phrases with _”_, for example:
> Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim.
### A note about our use of commas
As a comma is essentially a convention used for written text, commas were not used prolifically in transcription. Using a comma during transcription wherever one would expect to see it in a written text would not necessarily have reflected the individual's speech. This should be borne in mind when reading the transcripts.
### Individual letters
Individual letters were spelled out phonetically rather than transcribed as the letters themselves.
That is, this is correct:
> Roedd ganddo **ow si di**
**not:**
> Roedd ganddo **O C D**
**nor:**
> Roedd ganddo **OCD**
### Numbers
Numbers were transcribed as words rather than digits, thus this is correct:
> Y flwyddyn dwy fil ac ugain
**rather than:**
> Y flwyddyn 2020
### Half-finished words
Half-finished words are marked with a `-`. For example:
> Ma’n rhaid i mi **ca-** cael diod.
### Half-finished/restarted sentences
Half-finished sentences are marked with a `...`. For example:
> Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod.
### Speaker interruptions
There are many examples of a speaker interrupting another speaker by using non-verbal sounds, words or phrases (such as _m-hm_, _ie_, _ydi_, _yn union_ etc.) in the data. When the two speakers could be heard clearly and distinctly, a `...` was placed at the end of the first part of the broken speech, and another `...` at the beginning of the second part of the broken speech, as in the following example:
> Ond y peth yw... M-hm. ...mae’r ddau yn wir
When the two speakers could not be heard clearly and distinctly, the speech was omitted from the data.
### Swearwords
It should be noted that we have not omitted swearwords when transcribing.
## The future
It should be borne in mind when using this resource that this is an initial version of the transcript bank. We intend to refine and harmonize our transcripts further, and to add yet more transcripts to the bank regularly over the next year.
## Restrictions
In order to respect the contributors, by downloading this data you agree not to attempt to identify the speakers in the data.
## Acknowledgements
We thank the contributors for their permission to use their speech. We are also grateful to the Welsh Government for funding this work as part of the Text, Speech and Translation Technology project for the Welsh Language.
diffusers-parti-prompts/karlo-v1 | lastModified: 2023-05-17T16:49:02.000Z | tags: ["region:us"] | author: diffusers-parti-prompts | likes: 0 | downloads: 16 | created: 2023-05-14T22:06:00
---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Category
dtype: string
- name: Challenge
dtype: string
- name: Note
dtype: string
- name: images
dtype: image
- name: model_name
dtype: string
- name: seed
dtype: int64
splits:
- name: train
num_bytes: 161180147.0
num_examples: 1632
download_size: 161038543
dataset_size: 161180147.0
---
# Images of Parti Prompts for "karlo-v1"
Code that was used to get the results:
```py
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe.to("cuda")
prompt = "" # a parti prompt
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, prior_num_inference_steps=50, decoder_num_inference_steps=100, generator=generator).images[0]
```
jxu124/objects365 | lastModified: 2023-05-20T20:09:43.000Z | tags: ["region:us"] | author: jxu124 | likes: 0 | downloads: 16 | created: 2023-05-20T19:55:12
---
dataset_info:
features:
- name: global_image_id
dtype: string
- name: image_path
dtype: string
- name: anns_id
dtype: string
- name: format
dtype: string
- name: image_info
struct:
- name: file_name
dtype: string
- name: height
dtype: int64
- name: id
dtype: int64
- name: license
dtype: int64
- name: url
dtype: string
- name: width
dtype: int64
- name: anns_info
list:
- name: area
dtype: float64
- name: bbox
sequence: float64
- name: category
dtype: string
- name: category_id
dtype: int64
- name: id
dtype: int64
- name: image_id
dtype: int64
- name: iscrowd
dtype: int64
- name: isfake
dtype: int64
- name: isreflected
dtype: int64
splits:
- name: train
num_bytes: 3000445884
num_examples: 1742292
- name: validation
num_bytes: 145616533
num_examples: 80000
download_size: 1646594676
dataset_size: 3146062417
---
# Dataset Card for "objects365"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dev2bit/es2bash | lastModified: 2023-05-23T21:11:43.000Z | tags: ["task_categories:text-generation", "language:es", "license:apache-2.0", "code"] | author: dev2bit | description: This dataset consists of natural language requests (in Spanish) and the bash command that resolves each. | likes: 3 | downloads: 16 | created: 2023-05-23T20:25:37
---
license: apache-2.0
task_categories:
- text-generation
language:
- es
tags:
- code
---
# ES2Bash
This dataset contains a collection of natural language requests (in Spanish) and their corresponding bash commands. The purpose of this dataset is to provide examples of requests and their associated bash commands to facilitate machine learning and the development of natural language processing systems related to command-line operations.
# Features
The dataset consists of two main features:
* Natural Language Request (ES): This feature contains natural language requests written in Spanish. The requests represent tasks or actions to be performed using command-line commands.
* Bash Command: This feature contains the bash commands associated with each natural language request. The bash commands represent the way to execute the requested task or action using the command line.
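To make the two-feature layout concrete, here is a small Python sketch with hypothetical records; the field names are illustrative assumptions, and the actual column names in the released files may differ:

```python
# Hypothetical records pairing a Spanish-language request with the bash
# command that resolves it; field names here are assumptions, not the
# dataset's real schema.
examples = [
    {"request_es": "muestra el contenido del archivo notas.txt",
     "bash": "cat notas.txt"},
    {"request_es": "lista los archivos del directorio actual",
     "bash": "ls"},
    {"request_es": "entra en el directorio proyectos",
     "bash": "cd proyectos"},
]

for ex in examples:
    print(f"{ex['request_es']!r} -> {ex['bash']}")
```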
# Initial Commands
The dataset initially contains requests related to the following commands:
* cat: Requests involving reading text files.
* ls: Requests related to obtaining information about files and directories at a specific location.
* cd: Requests to change the current directory.
# Dataset Expansion
In addition to the initial commands mentioned above, there are plans to expand this dataset to include more common command-line commands. The expansion will cover a broader range of tasks and actions that can be performed using command-line operations.
Efforts will also be made to improve the existing examples and ensure that they are clear, accurate, and representative of typical requests that users may have when working with command lines.
# Request Statistics
In the future, statistical data will be provided on the requests present in this dataset. This data may include information about the distribution of requests in different categories, the frequency of use of different commands, and any other relevant analysis to better understand the usage and needs of command-line users.
# Request Collection Process
This dataset is the result of a combination of requests generated by language models and manually added requests. The requests generated by language models were based on existing examples and prior knowledge related to the usage of command lines. A manual review was then conducted to ensure the quality and relevance of the requests.
gretelai/symptom_to_diagnosis | lastModified: 2023-05-24T17:58:04.000Z | tags: ["task_categories:text-classification", "task_ids:multi-class-classification", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "medical"] | author: gretelai | likes: 4 | downloads: 16 | created: 2023-05-23T22:48:27
---
license: apache-2.0
task_categories:
- text-classification
task_ids:
- multi-class-classification
language:
- en
tags:
- medical
pretty_name: Gretel/symptoms_to_diagnosis
size_categories:
- 10K<n<100K
---
# Dataset Summary
This dataset contains natural language descriptions of symptoms labeled with 22 corresponding diagnoses. `Gretel/symptom_to_diagnosis` provides 1065 symptom descriptions in the English language labeled with 22 diagnoses, focusing on fine-grained single-domain diagnosis.
## Data Fields
Each row contains the following fields:
* `input_text` : A string field containing symptoms
* `output_text` : A string field containing a diagnosis
Example:
```
{
"output_text": "drug reaction",
"input_text": "I've been having headaches and migraines, and I can't sleep. My whole body shakes and twitches. Sometimes I feel lightheaded."
}
```
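Because each row is a small JSON object like the one above, per-diagnosis counts can be tallied with a few lines of Python (shown here on inline sample records rather than the distributed .jsonl files):

```python
import json
from collections import Counter

# Inline sample records in the same {"input_text", "output_text"} shape
# as the dataset rows.
sample_jsonl = "\n".join([
    json.dumps({"input_text": "I've been having headaches and migraines.",
                "output_text": "migraine"}),
    json.dumps({"input_text": "My skin is itchy, red and flaky.",
                "output_text": "psoriasis"}),
    json.dumps({"input_text": "Severe head pain with light sensitivity.",
                "output_text": "migraine"}),
])

counts = Counter(json.loads(line)["output_text"]
                 for line in sample_jsonl.splitlines())
print(counts.most_common())
```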
## Diagnoses
This table contains the count of each diagnosis in the train and test splits.
| | Diagnosis | train.jsonl | test.jsonl |
|---:|:--------------------------------|--------------:|-------------:|
| 0 | drug reaction | 40 | 8 |
| 1 | allergy | 40 | 10 |
| 2 | chicken pox | 40 | 10 |
| 3 | diabetes | 40 | 10 |
| 4 | psoriasis | 40 | 10 |
| 5 | hypertension | 40 | 10 |
| 6 | cervical spondylosis | 40 | 10 |
| 7 | bronchial asthma | 40 | 10 |
| 8 | varicose veins | 40 | 10 |
| 9 | malaria | 40 | 10 |
| 10 | dengue | 40 | 10 |
| 11 | arthritis | 40 | 10 |
| 12 | impetigo | 40 | 10 |
| 13 | fungal infection | 39 | 9 |
| 14 | common cold | 39 | 10 |
| 15 | gastroesophageal reflux disease | 39 | 10 |
| 16 | urinary tract infection | 39 | 9 |
| 17 | typhoid | 38 | 9 |
| 18 | pneumonia | 37 | 10 |
| 19 | peptic ulcer disease | 37 | 10 |
| 20 | jaundice | 33 | 7 |
| 21 | migraine | 32 | 10 |
## Data Splits
The data is split into 80% train (853 examples, 167 KB) and 20% test (212 examples, 42 KB).
## Dataset Creation
Data was filtered to remove unwanted categories and updated using an LLM to create language more consistent with how a patient would describe symptoms in natural language to a doctor.
## Source Data
This dataset was adapted based on the [Symptom2Disease](https://www.kaggle.com/datasets/niyarrbarman/symptom2disease) dataset from Kaggle.
## Personal and Sensitive Information
The symptoms in this dataset were modified from their original format using an LLM and do not contain personal data.
## Limitations
This dataset is licensed Apache 2.0 and free for use.
adrianhenkel/lucidprots_full_data | lastModified: 2023-06-15T17:12:22.000Z | tags: ["region:us"] | author: adrianhenkel | likes: 2 | downloads: 16 | created: 2023-06-15T16:58:30
---
dataset_info:
features:
- name: input_id_x
sequence: int64
- name: input_id_y
sequence: int64
splits:
- name: train
num_bytes: 65665021040
num_examples: 17070828
- name: test
num_bytes: 1131744
num_examples: 474
- name: valid
num_bytes: 4840024
num_examples: 1259
download_size: 5082803946
dataset_size: 65670992808
---
# Dataset Card for "lucidprots_full_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
dmayhem93/agieval-gaokao-chinese | lastModified: 2023-06-18T17:18:09.000Z | tags: ["license:mit", "arxiv:2304.06364"] | author: dmayhem93 | likes: 0 | downloads: 16 | created: 2023-06-18T12:47:45
---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 833642
num_examples: 246
download_size: 371866
dataset_size: 833642
license: mit
---
# Dataset Card for "agieval-gaokao-chinese"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```bibtex
@misc{zhong2023agieval,
      title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
      author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
      year={2023},
      eprint={2304.06364},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```