id stringlengths 2 115 | author stringlengths 2 42 | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ai2lumos/lumos_unified_ground_iterative | ai2lumos | 2023-10-26T06:06:47Z | 16 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"language-agent",
"maths",
"reasoning",
"question-answering",
"web-agent",
"grounding",
"region:us"
] | 2023-10-26T06:06:47Z | 2023-10-23T05:39:02.000Z | 2023-10-23T05:39:02 | ---
license: apache-2.0
task_categories:
- conversational
- text-generation
- question-answering
language:
- en
tags:
- language-agent
- maths
- reasoning
- question-answering
- web-agent
- grounding
size_categories:
- 10K<n<100K
---
# 🪄 Lumos: Language Agents with Unified Formats, Modular Design, and Open-Source LLMs
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.
**Lumos** has the following features:
* 🧩 **Modular Architecture**:
  - **Lumos** consists of planning, grounding, and execution modules built on LLAMA-2-7B.
* 🌍 **Diverse Training Data**:
  - **Lumos** is trained on ~40K high-quality annotations derived from ground-truth reasoning steps in existing benchmarks with GPT-4.
* 🚀 **Competitive Performance**:
  - 🚀 **Lumos** outperforms **GPT-4/3.5-based** agents on complex QA and web agent tasks, and **larger open agents** on maths tasks.
  - 🚀 **Lumos** performs better than open agent baseline formulations, including **chain-of-thought** and **unmodularized** training.
  - 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on an unseen task, WebShop.
## Data Overview
`lumos_unified_ground_iterative` is the data for training the **grounding** module on **maths**, **complex QA** and **web agent** tasks in the **Lumos-Iterative (Lumos-I)** formulation.
The sources of the training annotations are shown below:
| Task | Annotations |
|---|---|
|PRM800K|10000|
|GSM8K|7473|
|ASDiv|2305|
|StrategyQA|1777|
|Musique|17632|
|Mind2Web|1009|
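As a quick sanity check, the per-task counts in the table sum to the ~40K annotations mentioned above:

```python
# Per-task annotation counts copied from the table above.
counts = {
    "PRM800K": 10_000,
    "GSM8K": 7_473,
    "ASDiv": 2_305,
    "StrategyQA": 1_777,
    "Musique": 17_632,
    "Mind2Web": 1_009,
}

total = sum(counts.values())
print(total)  # 40196, i.e. the "~40K" training annotations
```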
## Models Trained with the Data
`lumos_unified_ground_iterative` is used to train the following models.
|Model|Huggingface Repo|
|---|---|
|`lumos_unified_ground_iterative`| [๐คHuggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_ground_iterative) |
## Citation
If you find this work relevant to your research, please feel free to cite it!
```
@article{yin2023lumos,
title={Lumos: Towards Language Agents that are Unified, Modular, and Open Source},
author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
year={2023}
}
``` | [
-0.10158741474151611,
-0.5535706877708435,
0.4079456925392151,
0.2614874839782715,
-0.24516040086746216,
0.034184690564870834,
-0.4709131121635437,
-0.5804031491279602,
0.30083346366882324,
0.3738647997379303,
-0.5250989198684692,
-0.6186016201972961,
-0.38023072481155396,
-0.0877607390284... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
james-burton/vet_month_1d_ordinal | james-burton | 2023-10-23T14:42:15Z | 16 | 0 | null | [
"region:us"
] | 2023-10-23T14:42:15Z | 2023-10-23T14:42:11.000Z | 2023-10-23T14:42:11 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: age_at_consult
dtype: float64
- name: Ear_or_Mastoid
dtype: int64
- name: Mental_Behavioral_or_Neuro
dtype: int64
- name: Blood_or_Blood-forming
dtype: int64
- name: Circulatory
dtype: int64
- name: Dental
dtype: int64
- name: Developmental
dtype: int64
- name: Digestive
dtype: int64
- name: Endocrine_Nutritional_or_Metabolic
dtype: int64
- name: Immune
dtype: int64
- name: Infectious_or_Parasitic
dtype: int64
- name: Skin
dtype: int64
- name: Musculoskeletal_or_Connective_Tissue
dtype: int64
- name: Neoplasms
dtype: int64
- name: Nervous
dtype: int64
- name: Visual
dtype: int64
- name: Perinatal
dtype: int64
- name: Pregnancy_Childbirth_or_Puerperium
dtype: int64
- name: Respiratory
dtype: int64
- name: Injury_Poisoning_or_External_Causes
dtype: int64
- name: Genitourinary
dtype: int64
- name: gender
dtype: float64
- name: neutered
dtype: float64
- name: species
dtype: float64
- name: insured
dtype: float64
- name: practice_id
dtype: string
- name: premise_id
dtype: string
- name: breed
dtype: string
- name: region
dtype: string
- name: record
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 5867630
num_examples: 8552
- name: validation
num_bytes: 1037398
num_examples: 1510
- name: test
num_bytes: 1791540
num_examples: 2606
download_size: 4036706
dataset_size: 8696568
---
# Dataset Card for "vet_month_1d_ordinal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3310704529285431,
-0.14880113303661346,
0.14407502114772797,
0.1184600293636322,
-0.47139063477516174,
-0.4357244670391083,
0.7269326448440552,
0.023575004190206528,
0.8074061870574951,
0.6453566551208496,
-0.9698222279548645,
-1.1863378286361694,
-0.36167043447494507,
-0.15565049648284... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
orafandina/wiki_long_600k | orafandina | 2023-10-23T17:17:50Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-23T17:17:50Z | 2023-10-23T17:13:49.000Z | 2023-10-23T17:13:49 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Medint/Multi-Med-conversational | Medint | 2023-10-24T09:21:47Z | 16 | 0 | null | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"medical",
"biology",
"region:us"
] | 2023-10-24T09:21:47Z | 2023-10-24T08:50:36.000Z | 2023-10-24T08:50:36 | ---
task_categories:
- conversational
language:
- en
tags:
- medical
- biology
size_categories:
- 10K<n<100K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
thangvip/orca-filter-half-open | thangvip | 2023-11-07T07:44:10Z | 16 | 0 | null | [
"region:us"
] | 2023-11-07T07:44:10Z | 2023-10-25T04:16:52.000Z | 2023-10-25T04:16:52 | ---
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 636502840.4529436
num_examples: 655016
download_size: 338685611
dataset_size: 636502840.4529436
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "orca-filter-half-open"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5375934839248657,
-0.4946195185184479,
0.06285306811332703,
0.006593786645680666,
-0.4760870635509491,
-0.222711443901062,
0.2699283957481384,
-0.2597554922103882,
0.8354470729827881,
0.747326135635376,
-0.9324296116828918,
-0.9588097333908081,
-0.4735305905342102,
-0.33539891242980957,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
parksimon0808/prm800k-llama-verifier | parksimon0808 | 2023-11-08T21:35:49Z | 16 | 0 | null | [
"region:us"
] | 2023-11-08T21:35:49Z | 2023-10-26T00:09:40.000Z | 2023-10-26T00:09:40 | ---
dataset_info:
features:
- name: texts
dtype: string
- name: input_ids
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 4515439728
num_examples: 1052294
- name: test
num_bytes: 144754726
num_examples: 32408
download_size: 341805703
dataset_size: 4660194454
---
# Dataset Card for "prm800k-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4921893775463104,
-0.1075231209397316,
0.23418326675891876,
0.4213508367538452,
-0.6259106993675232,
0.06623821705579758,
0.4611261785030365,
-0.12407033145427704,
1.032914638519287,
0.6887088418006897,
-0.7862118482589722,
-0.7544445395469666,
-0.7419900894165039,
-0.01861179992556572,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
naman1011/spider | naman1011 | 2023-10-26T05:37:37Z | 16 | 0 | null | [
"region:us"
] | 2023-10-26T05:37:37Z | 2023-10-26T05:06:17.000Z | 2023-10-26T05:06:17 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CJWeiss/multishort | CJWeiss | 2023-10-26T21:34:51Z | 16 | 0 | null | [
"region:us"
] | 2023-10-26T21:34:51Z | 2023-10-26T21:34:18.000Z | 2023-10-26T21:34:18 | ---
dataset_info:
features:
- name: id
dtype: string
- name: sources
sequence: string
- name: summary/long
dtype: string
- name: summary/short
dtype: string
- name: summary/tiny
dtype: string
splits:
- name: train
num_bytes: 949594524.2185664
num_examples: 2340
- name: test
num_bytes: 189516235.24229074
num_examples: 486
- name: valid
num_bytes: 137063421.14537445
num_examples: 312
download_size: 762638149
dataset_size: 1276174180.6062317
---
# Dataset Card for "multishort"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.625534176826477,
-0.17823614180088043,
0.32446154952049255,
0.5855172872543335,
-0.4049891531467438,
0.08042406290769577,
0.3167226016521454,
-0.22398610413074493,
0.7961598634719849,
0.27647748589515686,
-0.8242238163948059,
-0.7307069301605225,
-0.757696270942688,
-0.3761948347091675,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wisenut-nlp-team/FiD_aihub_commonsense | wisenut-nlp-team | 2023-10-30T05:47:45Z | 16 | 1 | null | [
"region:us"
] | 2023-10-30T05:47:45Z | 2023-10-27T04:35:23.000Z | 2023-10-27T04:35:23 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: similar_contexts
sequence: string
splits:
- name: train
num_bytes: 939634163
num_examples: 90241
- name: validation
num_bytes: 104207636
num_examples: 10027
download_size: 614695228
dataset_size: 1043841799
---
# Dataset Card for "FiD_aihub_commonsense"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7354739904403687,
-0.5178387761116028,
-0.022883739322423935,
0.0480162687599659,
-0.24290131032466888,
-0.13615503907203674,
0.36092060804367065,
-0.10206348448991776,
0.7945816516876221,
0.46390730142593384,
-0.6602675914764404,
-0.6466235518455505,
-0.5388849377632141,
-0.16798979043... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
josedonoso/apples-dataset-60 | josedonoso | 2023-10-27T23:42:15Z | 16 | 0 | null | [
"region:us"
] | 2023-10-27T23:42:15Z | 2023-10-27T23:42:13.000Z | 2023-10-27T23:42:13 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 677659.0
num_examples: 48
- name: test
num_bytes: 161130.0
num_examples: 12
download_size: 839070
dataset_size: 838789.0
---
# Dataset Card for "apples-dataset-60"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.618553876876831,
-0.1341763734817505,
0.23544539511203766,
0.16078127920627594,
-0.0703120306134224,
0.10323314368724823,
0.4419080317020416,
-0.2383979856967926,
0.7756814956665039,
0.42718076705932617,
-1.0151242017745972,
-0.7054880261421204,
-0.635825514793396,
-0.328631192445755,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
akkasi/xed_en_fi | akkasi | 2023-10-28T19:40:24Z | 16 | 0 | null | [
"region:us"
] | 2023-10-28T19:40:24Z | 2023-10-28T19:40:22.000Z | 2023-10-28T19:40:22 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: float64
- name: label2idx
dtype: string
- name: idx2label
dtype: string
splits:
- name: train
num_bytes: 5184988
num_examples: 14022
- name: test
num_bytes: 1298121
num_examples: 3506
download_size: 603616
dataset_size: 6483109
---
# Dataset Card for "xed_en_fi_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8394039869308472,
-0.11563440412282944,
0.10888543725013733,
-0.041379909962415695,
-0.2582800090312958,
0.17496874928474426,
0.3591393828392029,
-0.23183433711528778,
1.0241144895553589,
0.5148746371269226,
-0.964393138885498,
-0.7594119906425476,
-0.5056396126747131,
-0.21066994965076... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cxllin/economics | cxllin | 2023-10-28T22:27:36Z | 16 | 2 | null | [
"region:us"
] | 2023-10-28T22:27:36Z | 2023-10-28T21:49:43.000Z | 2023-10-28T21:49:43 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for cxllin/economics
This dataset aims to represent knowledge within the realm of economics.
## Dataset Details
Featuring macroeconomics, microeconomics, and math textbooks.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.46179887652397156,
-0.5925436615943909,
0.3478051722049713,
0.19007107615470886,
-0.14557500183582306,
-0.008749550208449364,
-0.16345037519931793,
-0.6249460577964783,
0.35155653953552246,
0.7855766415596008,
-0.6743676662445068,
-0.944007933139801,
-0.4563147723674774,
0.0630006417632... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
exponent/tinyc4 | exponent | 2023-11-05T14:07:20Z | 16 | 1 | null | [
"region:us"
] | 2023-11-05T14:07:20Z | 2023-10-29T12:15:38.000Z | 2023-10-29T12:15:38 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
kheopsai/mise_demeure_gen | kheopsai | 2023-10-31T07:42:08Z | 16 | 0 | null | [
"region:us"
] | 2023-10-31T07:42:08Z | 2023-10-31T07:41:36.000Z | 2023-10-31T07:41:36 | Entry not found | [
-0.3227647542953491,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965083122253,
0.7915717959403992,
0.07618629932403564,
0.7746022343635559,
0.2563222348690033,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
girrajjangid/guanaco-9k | girrajjangid | 2023-10-31T11:54:22Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-10-31T11:54:22Z | 2023-10-31T11:34:42.000Z | 2023-10-31T11:34:42 | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 14091569
num_examples: 9000
download_size: 8325237
dataset_size: 14091569
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Lollitor/MyPubChem10 | Lollitor | 2023-10-31T13:03:18Z | 16 | 0 | null | [
"region:us"
] | 2023-10-31T13:03:18Z | 2023-10-31T13:02:30.000Z | 2023-10-31T13:02:30 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1482327.0
num_examples: 9000
- name: validation
num_bytes: 164703.0
num_examples: 1000
download_size: 514907
dataset_size: 1647030.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "MyPubChem10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7992189526557922,
-0.19728393852710724,
0.2121015191078186,
0.4246490001678467,
-0.08189471065998077,
-0.016750071197748184,
0.28152820467948914,
-0.07597488164901733,
0.9295178055763245,
0.49304646253585815,
-0.8174377083778381,
-0.546852707862854,
-0.5024399161338806,
-0.1002680808305... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
youyu0105/llm-MIDI4 | youyu0105 | 2023-10-31T13:55:47Z | 16 | 0 | null | [
"region:us"
] | 2023-10-31T13:55:47Z | 2023-10-31T13:55:41.000Z | 2023-10-31T13:55:41 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 570535
num_examples: 335
download_size: 131987
dataset_size: 570535
---
# Dataset Card for "llm-MIDI4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.681243896484375,
-0.07544247061014175,
0.5717120170593262,
0.22671383619308472,
-0.23991434276103973,
0.16865360736846924,
0.2857315242290497,
-0.12140911817550659,
0.8034325242042542,
0.5135128498077393,
-1.0070420503616333,
-0.9396316409111023,
-0.5466881990432739,
-0.1589097678661346... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
stsudharsan/veshti-controlnet-v4-canny | stsudharsan | 2023-10-31T15:07:34Z | 16 | 0 | null | [
"region:us"
] | 2023-10-31T15:07:34Z | 2023-10-31T15:07:26.000Z | 2023-10-31T15:07:26 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_img
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 29728534.0
num_examples: 143
download_size: 28847175
dataset_size: 29728534.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "veshti-controlnet-v4-canny"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.32769832015037537,
-0.0015007136389613152,
0.1132977232336998,
0.32525232434272766,
-0.4037424325942993,
0.09472203999757767,
0.3397572934627533,
-0.21571853756904602,
1.0724072456359863,
0.7161296010017395,
-0.8782733678817749,
-0.7752742767333984,
-0.5768036842346191,
-0.0766193196177... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
minoosh/shEMO_speech | minoosh | 2023-11-01T06:35:49Z | 16 | 0 | null | [
"region:us"
] | 2023-11-01T06:35:49Z | 2023-11-01T06:34:38.000Z | 2023-11-01T06:34:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: emotion
dtype:
class_label:
names:
'0': A
'1': H
'2': N
'3': S
'4': W
'5': F
splits:
- name: train
num_bytes: 856321868.0
num_examples: 2400
- name: test
num_bytes: 100721512.0
num_examples: 300
- name: valid
num_bytes: 105982082.0
num_examples: 300
download_size: 1043899986
dataset_size: 1063025462.0
---
# Dataset Card for "shEMO_speech"
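The single-letter emotion codes in the `class_label` above are not expanded anywhere in this card; assuming they follow the original ShEMO database's convention (anger, happiness, neutral, sadness, surprise, fear), a minimal decoding helper might look like:

```python
# Assumed mapping from ShEMO's single-letter codes to emotion names.
# The card itself does not spell these out, so treat this as a sketch
# based on the original ShEMO database ("W" comes from "wonder",
# i.e. surprise), not as an authoritative label map.
ID2LETTER = {0: "A", 1: "H", 2: "N", 3: "S", 4: "W", 5: "F"}
LETTER2NAME = {
    "A": "anger",
    "H": "happiness",
    "N": "neutral",
    "S": "sadness",
    "W": "surprise",
    "F": "fear",
}

def decode_emotion(label_id: int) -> str:
    """Map an integer class label to a human-readable emotion name."""
    return LETTER2NAME[ID2LETTER[label_id]]

print(decode_emotion(4))  # surprise
```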
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3857187330722809,
-0.284849613904953,
-0.06010802462697029,
0.07834389805793762,
-0.22955472767353058,
0.04000261798501015,
-0.1325548142194748,
-0.10746913403272629,
0.5381865501403809,
0.4186125695705414,
-0.8371317386627197,
-0.8237480521202087,
-0.7538790702819824,
-0.54587650299072... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ESGBERT/social_2k | ESGBERT | 2023-11-03T16:12:24Z | 16 | 0 | null | [
"region:us"
] | 2023-11-03T16:12:24Z | 2023-11-02T13:53:35.000Z | 2023-11-02T13:53:35 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
anamhira/ios_action | anamhira | 2023-11-14T19:14:17Z | 16 | 0 | null | [
"region:us"
] | 2023-11-14T19:14:17Z | 2023-11-02T20:50:12.000Z | 2023-11-02T20:50:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: prompt
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 482012
num_examples: 233
- name: valid
num_bytes: 5762
num_examples: 3
download_size: 79950
dataset_size: 487774
---
# Dataset Card for "ios_action"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4155105650424957,
-0.2635815143585205,
0.034222979098558426,
0.39057302474975586,
-0.12723305821418762,
-0.10497935861349106,
0.5982116460800171,
-0.0364595390856266,
1.1571085453033447,
0.45658645033836365,
-0.8222128748893738,
-0.6987029910087585,
-0.5209060907363892,
-0.5014651417732... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
alexemanuel27/org_acad | alexemanuel27 | 2023-11-04T17:11:41Z | 16 | 0 | null | [
"region:us"
] | 2023-11-04T17:11:41Z | 2023-11-04T17:05:39.000Z | 2023-11-04T17:05:39 | ---
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: title
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 628748
num_examples: 100
download_size: 33141
dataset_size: 628748
---
# Dataset Card for "org_acad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5162050724029541,
-0.25440284609794617,
0.16012735664844513,
0.03910098969936371,
-0.15016251802444458,
0.16102834045886993,
0.41114747524261475,
-0.1430445909500122,
0.7301710247993469,
0.35627463459968567,
-0.6150678992271423,
-0.8736128211021423,
-0.5621635317802429,
-0.1917202621698... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AdvayK/SFD_7 | AdvayK | 2023-11-06T17:32:48Z | 16 | 0 | null | [
"region:us"
] | 2023-11-06T17:32:48Z | 2023-11-06T17:32:07.000Z | 2023-11-06T17:32:07 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 382894422.7379618
num_examples: 625
- name: test
num_bytes: 164473290.26203808
num_examples: 268
download_size: 444577398
dataset_size: 547367712.9999999
---
# Dataset Card for "SFD_7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6356401443481445,
-0.21738463640213013,
0.32102569937705994,
0.40923282504081726,
-0.39840322732925415,
0.05204830691218376,
0.5088659524917603,
-0.154813751578331,
0.7236523032188416,
0.7353574633598328,
-0.7407948970794678,
-0.7845777869224548,
-0.5353339314460754,
-0.0302799884229898... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Konthee/en-th-dataset | Konthee | 2023-11-10T20:03:33Z | 16 | 0 | null | [
"region:us"
] | 2023-11-10T20:03:33Z | 2023-11-10T15:55:10.000Z | 2023-11-10T15:55:10 | ---
dataset_info:
features:
- name: src_input_ids
sequence: int64
- name: src_attention_mask
sequence: int64
- name: trg_input_ids
sequence: int64
- name: trg_attention_mask
sequence: int64
splits:
- name: train
num_bytes: 15243224112
num_examples: 7385283
download_size: 257016533
dataset_size: 15243224112
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "en-th-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6994622945785522,
-0.28051653504371643,
0.20866543054580688,
0.21346542239189148,
-0.2737329602241516,
0.06957493722438812,
0.1836165338754654,
-0.295892596244812,
1.041951060295105,
0.5032457709312439,
-0.8798971176147461,
-0.7624788880348206,
-0.6705771684646606,
-0.08118633925914764,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gmongaras/BERT_Base_Cased_512_Dataset_NoPunct | gmongaras | 2023-11-11T04:13:47Z | 16 | 0 | null | [
"region:us"
] | 2023-11-11T04:13:47Z | 2023-11-11T02:28:11.000Z | 2023-11-11T02:28:11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 26481229962
num_examples: 109375187
download_size: 10242692263
dataset_size: 26481229962
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
Dataset built with the bert-cased tokenizer; sentences are cut off at 512 tokens (single sentences, not sentence pairs), with all sentence pairs extracted.
Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia Variant: 20220301.en | [
-0.5910875797271729,
-0.6827324628829956,
0.1776169091463089,
0.4760671555995941,
-0.3766648471355438,
-0.2749967873096466,
-0.3083949089050293,
-0.23882059752941132,
0.5522283911705017,
0.7554016709327698,
-0.9177096486091614,
-0.5005934238433838,
-0.3876032829284668,
0.28186991810798645,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
csupiisc/tariffplan3k | csupiisc | 2023-11-11T06:22:10Z | 16 | 0 | null | [
"region:us"
] | 2023-11-11T06:22:10Z | 2023-11-11T06:09:29.000Z | 2023-11-11T06:09:29 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2626673
num_examples: 2000
- name: test
num_bytes: 1312983
num_examples: 1000
download_size: 364794
dataset_size: 3939656
---
# Dataset Card for "tariffplan3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5949693918228149,
0.15821018815040588,
0.17776146531105042,
0.61729496717453,
-0.2379460483789444,
-0.0655076876282692,
0.4866260886192322,
-0.11243404448032379,
0.7111794948577881,
0.8687366247177124,
-0.6297191977500916,
-0.8235676288604736,
-0.36913594603538513,
-0.2576866149902344,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
cmu-mlsp/hubert_layer9-librispeech-asr100h_tokenized | cmu-mlsp | 2023-11-11T20:36:12Z | 16 | 0 | null | [
"region:us"
] | 2023-11-11T20:36:12Z | 2023-11-11T20:35:58.000Z | 2023-11-11T20:35:58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1337768164
num_examples: 57078
- name: validation
num_bytes: 126705828
num_examples: 5406
- name: test
num_bytes: 122815120
num_examples: 5240
download_size: 110156012
dataset_size: 1587289112
---
# Dataset Card for "hubert_layer9-librispeech-asr100h_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3857317566871643,
-0.30186066031455994,
0.02850000374019146,
0.4818037450313568,
-0.13552726805210114,
0.1932157576084137,
0.15058955550193787,
-0.15230529010295868,
0.9953410625457764,
0.6340152621269226,
-0.6765329837799072,
-0.6214157938957214,
-0.4781477749347687,
-0.299693137407302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mmcho1157/attackgpt_base | mmcho1157 | 2023-11-12T12:47:20Z | 16 | 0 | null | [
"region:us"
] | 2023-11-12T12:47:20Z | 2023-11-12T12:47:19.000Z | 2023-11-12T12:47:19 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 16440
num_examples: 70
download_size: 2433
dataset_size: 16440
---
# Dataset Card for "attackgpt_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6951307654380798,
-0.45428773760795593,
0.036651719361543655,
0.17690788209438324,
-0.1325826644897461,
-0.027664504945278168,
0.3099825978279114,
-0.015944121405482292,
0.7126320600509644,
0.4604608416557312,
-0.5912579298019409,
-0.7029094696044922,
-0.7872254848480225,
-0.48406305909... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
wt-golf/acronym-identification-1k | wt-golf | 2023-11-12T13:35:10Z | 16 | 0 | null | [
"region:us"
] | 2023-11-12T13:35:10Z | 2023-11-12T13:35:06.000Z | 2023-11-12T13:35:06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: labels
sequence: int64
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 555254
num_examples: 1000
- name: validation
num_bytes: 536083
num_examples: 1000
- name: test
num_bytes: 568935
num_examples: 1000
download_size: 312635
dataset_size: 1660272
---
# Dataset Card for "acronym-identification-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5825343132019043,
-0.2837797999382019,
-0.03588871657848358,
0.2793649435043335,
-0.4985833466053009,
0.2593156397342682,
0.6018189191818237,
-0.21204374730587006,
1.0860177278518677,
0.1710529327392578,
-0.8827574253082275,
-0.738406240940094,
-0.7171754240989685,
0.008719001896679401,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
obalcells/advbench | obalcells | 2023-11-13T10:17:11Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-13T10:17:11Z | 2023-11-13T09:45:30.000Z | 2023-11-13T09:45:30 | ---
license: mit
dataset_info:
features:
- name: goal
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 84165
num_examples: 520
download_size: 35093
dataset_size: 84165
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
OliverYoung/threejs | OliverYoung | 2023-11-13T14:08:25Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-13T14:08:25Z | 2023-11-13T13:00:13.000Z | 2023-11-13T13:00:13 | ---
license: mit
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/amazon_review_automotive_nonautomotive | zxvix | 2023-11-14T07:33:05Z | 16 | 0 | null | [
"region:us"
] | 2023-11-14T07:33:05Z | 2023-11-14T07:33:01.000Z | 2023-11-14T07:33:01 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 104083.0
num_examples: 100
download_size: 70736
dataset_size: 104083.0
---
# Dataset Card for "amazon_review_automotive_nonautomotive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6345458030700684,
-0.2291933298110962,
0.16268102824687958,
0.22357186675071716,
-0.323953241109848,
0.16565001010894775,
0.2855570316314697,
-0.35905131697654724,
0.6831247210502625,
0.3088967800140381,
-1.0362963676452637,
-0.6741628050804138,
-0.2911720275878906,
-0.22628627717494965... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zxvix/amazon_review_automotive_academic | zxvix | 2023-11-14T07:38:51Z | 16 | 0 | null | [
"region:us"
] | 2023-11-14T07:38:51Z | 2023-11-14T07:38:48.000Z | 2023-11-14T07:38:48 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: original_text
dtype: string
splits:
- name: test
num_bytes: 120225.0
num_examples: 100
download_size: 81344
dataset_size: 120225.0
---
# Dataset Card for "amazon_review_automotive_academic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6580350399017334,
-0.13489478826522827,
0.27745795249938965,
0.2376277893781662,
-0.10065990686416626,
0.2517743706703186,
0.2826730012893677,
-0.3936622142791748,
0.3946422338485718,
0.18484055995941162,
-0.9144619703292847,
-0.7302451133728027,
-0.21703532338142395,
-0.254199117422103... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BEE-spoke-data/medium-articles-en | BEE-spoke-data | 2023-11-14T21:36:02Z | 16 | 0 | null | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:100K<n<1M",
"source_datasets:fabiochiu/medium-articles",
"language:en",
"license:mit",
"region:us"
] | 2023-11-14T21:36:02Z | 2023-11-14T21:26:15.000Z | 2023-11-14T21:26:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: authors
dtype: string
- name: timestamp
dtype: string
- name: tags
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 930797692.9172074
num_examples: 171340
- name: validation
num_bytes: 24494962.048346493
num_examples: 4509
- name: test
num_bytes: 24494962.048346493
num_examples: 4509
download_size: 615394671
dataset_size: 979787617.0139004
license: mit
language:
- en
size_categories:
- 100K<n<1M
source_datasets: fabiochiu/medium-articles
task_categories:
- text-classification
- text-generation
---
# Dataset Card for "medium-articles-en"
`fabiochiu/medium-articles` filtered to English (`en`) articles of 100 or more GPT-4 tiktoken tokens.
-0.6397663950920105,
-0.4309876263141632,
0.44705963134765625,
0.41011857986450195,
-1.1845492124557495,
0.19803419709205627,
-0.25096815824508667,
-0.2707413136959076,
0.7372432947158813,
0.57168048620224,
-0.8350663781166077,
-0.9700013995170593,
-0.7256289124488831,
0.5263996124267578,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ai-shift/ameba_faq_search | ai-shift | 2023-11-15T06:31:08Z | 16 | 4 | null | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:ja",
"license:cc-by-nd-4.0",
"region:us"
] | 2023-11-15T06:31:08Z | 2023-11-15T04:58:19.000Z | 2023-11-15T04:58:19 | ---
task_categories:
- question-answering
language:
- ja
size_categories:
- 100K<n<1M
license: cc-by-nd-4.0
---
# AMEBA Blog FAQ Search Dataset
This data was obtained by crawling [this website](https://helps.ameba.jp/faq/).
The FAQ Data was processed to remove HTML tags and other formatting after crawling, and entries containing excessively long content were excluded.
The Query Data was generated using a Large Language Model (LLM). Please refer to the following blog posts for information about the generation process.
- https://www.ai-shift.co.jp/techblog/3710
- https://www.ai-shift.co.jp/techblog/3761
## Column description
FAQ Data (target_faq.csv)
- ID: Unique ID of the FAQ
- Title: Title of the FAQ
- Content: Answer content of the FAQ
Query Data (queries_{train/validation/test}.csv)
- ID: Unique ID of the correct FAQ
- Query: Question text
- difficulty: The difficulty level of the query
  - Indicates whether a query related to the correct FAQ appears in the training set.
  - Queries labeled "difficult" have no related query in the train data, while queries labeled "easy" do; accordingly, the train data contains only "easy" queries.
-0.5423260927200317,
-0.9504138231277466,
0.3450299799442291,
0.2604636549949646,
-0.2435697466135025,
0.002334152115508914,
-0.02830379083752632,
-0.026031315326690674,
0.32311534881591797,
0.7659603357315063,
-0.8281928896903992,
-0.9075445532798767,
-0.2101341187953949,
0.25190815329551... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
lramriez/dominoplays | lramriez | 2023-11-16T02:49:24Z | 16 | 0 | null | [
"license:apache-2.0",
"region:us"
] | 2023-11-16T02:49:24Z | 2023-11-16T02:48:03.000Z | 2023-11-16T02:48:03 | ---
license: apache-2.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
V12X-ksr/FOCALtask | V12X-ksr | 2023-11-16T10:46:54Z | 16 | 0 | null | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"astronomy",
"region:us"
] | 2023-11-16T10:46:54Z | 2023-11-16T04:08:33.000Z | 2023-11-16T04:08:33 | ---
annotations_creators:
- expert-generated
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
tags:
- astronomy
dataset_info:
features:
- name: Functions Text
sequence: string
- name: Functions Label
sequence: string
splits:
- name: train
num_bytes: 542275
num_examples: 2421
- name: val
num_bytes: 542275
num_examples: 411
- name: test
num_bytes: 542275
num_examples: 410
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | [
-0.5322356224060059,
-0.5534716844558716,
0.1290130317211151,
0.23470577597618103,
-0.39626216888427734,
-0.11762470006942749,
-0.03545305132865906,
-0.6389272212982178,
0.5699822306632996,
0.7838326692581177,
-0.7834625840187073,
-0.9173274040222168,
-0.55633145570755,
0.13078093528747559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
promptora11/QandA | promptora11 | 2023-11-16T09:46:41Z | 16 | 0 | null | [
"region:us"
] | 2023-11-16T09:46:41Z | 2023-11-16T09:46:37.000Z | 2023-11-16T09:46:37 | ---
dataset_info:
features:
- name: Query
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 8148
num_examples: 40
download_size: 6814
dataset_size: 8148
---
# Dataset Card for "QandA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5385589003562927,
-0.07884776592254639,
0.21102924644947052,
0.3082347810268402,
-0.40450015664100647,
0.12573465704917908,
0.5362929105758667,
-0.27568283677101135,
0.9954246878623962,
0.3490508794784546,
-0.7822370529174805,
-0.787483811378479,
-0.531536340713501,
-0.2258782535791397,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
iamkaikai/fonts | iamkaikai | 2023-11-16T17:50:16Z | 16 | 0 | null | [
"region:us"
] | 2023-11-16T17:50:16Z | 2023-11-16T17:50:13.000Z | 2023-11-16T17:50:13 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 75777720.32
num_examples: 5016
download_size: 4942032
dataset_size: 75777720.32
---
# Dataset Card for "fonts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6429925560951233,
-0.27480101585388184,
0.09997981786727905,
0.37681859731674194,
-0.16385044157505035,
0.029338698834180832,
0.1438450962305069,
-0.23984402418136597,
0.7578831911087036,
0.4645249843597412,
-0.7851982712745667,
-0.7923531532287598,
-0.7060340046882629,
-0.1400725841522... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
renumics/cloome_demo | renumics | 2023-11-16T19:43:36Z | 16 | 0 | null | [
"region:us"
] | 2023-11-16T19:43:36Z | 2023-11-16T19:14:19.000Z | 2023-11-16T19:14:19 | ---
dataset_info:
features:
- name: SAMPLE_KEY_mol
dtype: string
- name: SAMPLE_KEY_img
dtype: string
- name: SMILES
dtype: string
- name: mol_embedding_reduced
sequence: float64
- name: img_embedding_reduced
sequence: float64
- name: mol_embedding
sequence: float32
- name: img_embedding
sequence: float32
- name: image
dtype: image
- name: distance
dtype: float64
- name: index
dtype: int64
- name: smiles_image
dtype: image
splits:
- name: train
num_bytes: 975216313.25
num_examples: 30403
download_size: 1002070493
dataset_size: 975216313.25
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a mirror of the example dataset for the paper "CLOOME: a new search engine unlocks bioimaging databases for queries with chemical structures" by Sanchez-Fernandez et al.
Paper: https://www.biorxiv.org/content/10.1101/2022.11.17.516915v1
Code: https://github.com/ml-jku/cloome

| [
-0.3033643662929535,
-0.4539898633956909,
0.8895414471626282,
-0.1648167073726654,
-0.24006932973861694,
-0.42693832516670227,
-0.07288553565740585,
-0.18123993277549744,
0.5990828275680542,
0.526200532913208,
-0.965286374092102,
-0.8072574734687805,
-0.21301458775997162,
0.314878463745117... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zoharli/sst2_priv | zoharli | 2023-11-17T08:55:56Z | 16 | 0 | null | [
"region:us"
] | 2023-11-17T08:55:56Z | 2023-11-17T08:55:55.000Z | 2023-11-17T08:55:55 | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 514988
num_examples: 6734
download_size: 374542
dataset_size: 514988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jimregan/eatd_corpus | jimregan | 2023-11-17T12:32:03Z | 16 | 1 | null | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"language:zh",
"license:other",
"region:us"
] | 2023-11-17T12:32:03Z | 2023-11-17T12:24:04.000Z | 2023-11-17T12:24:04 | ---
license: other
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- zh
---
The EATD Corpus is hosted in [this github repository](https://github.com/speechandlanguageprocessing/ICASSP2022-Depression).
Follow the instructions there to download and unzip the data.
This dataset can be used with the following line of code, changing the path of `data_dir` to the one appropriate to your system:
```python
dataset = load_dataset('jimregan/eatd_corpus', data_dir='/tmp/EATD-Corpus/')
```
| [
-0.4162033498287201,
-0.4024879038333893,
0.2688017785549164,
0.18851426243782043,
-0.016659870743751526,
0.17650167644023895,
-0.3680814504623413,
-0.2780308127403259,
0.8891909122467041,
0.4917849004268646,
-0.16818006336688995,
-0.7433291077613831,
-0.5376495122909546,
0.326948136091232... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
maxspin/medibot_dataset | maxspin | 2023-11-18T16:24:18Z | 16 | 0 | null | [
"license:mit",
"region:us"
] | 2023-11-18T16:24:18Z | 2023-11-18T16:10:09.000Z | 2023-11-18T16:10:09 | ---
license: mit
---
The "medibot_chat.csv" file contains data specifically designed for training the LlaMA2 chat model. The dataset is structured as follows:
<s>[INST]{user_query 1}[/INST]{chatbot_response 1}[INST]{user_query 2}[/INST]{chatbot_response 2}....[INST]{user_query n}[/INST]{chatbot response n}</s>
Please note that this dataset was generated with the assistance of ChatGPT 3.5 and may not adhere to medical standards. It is crucial not to integrate this model into any real-life medical applications. For such applications, it is recommended to create a more accurate and verified dataset. The current dataset is intended solely for the purpose of training the LlaMA2 chat model and evaluating the effectiveness of fine-tuning. | [
0.11966478824615479,
-0.7735380530357361,
0.03753141313791275,
0.2968015968799591,
-0.47842028737068176,
0.24482333660125732,
0.027016736567020416,
-0.399146169424057,
0.3224470019340515,
0.8687745928764343,
-0.8959911465644836,
-0.6575331091880798,
-0.5749180316925049,
-0.0788070112466812... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Mauregato/leaf_disease_segmentation | Mauregato | 2023-11-19T17:18:30Z | 16 | 0 | null | [
"region:us"
] | 2023-11-19T17:18:30Z | 2023-11-19T14:18:19.000Z | 2023-11-19T14:18:19 | ---
dataset_info:
features:
- name: image
dtype: image
- name: mask
dtype: image
splits:
- name: train
num_bytes: 678815118.255
num_examples: 2205
- name: val
num_bytes: 51994848.0
num_examples: 294
- name: test
num_bytes: 72520572.0
num_examples: 441
download_size: 480478012
dataset_size: 803330538.255
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
healthcorum/autotrain-data-tu9p-fvi7-zb2n | healthcorum | 2023-11-19T20:48:35Z | 16 | 0 | null | [
"region:us"
] | 2023-11-19T20:48:35Z | 2023-11-19T20:48:34.000Z | 2023-11-19T20:48:34 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: responses
dtype: string
- name: autotrain_text
dtype: string
splits:
- name: train
num_bytes: 36088167
num_examples: 9998
- name: validation
num_bytes: 36088167
num_examples: 9998
download_size: 12071286
dataset_size: 72176334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "autotrain-data-tu9p-fvi7-zb2n"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5086972117424011,
0.09821312129497528,
0.03789885342121124,
0.306129515171051,
-0.3284705877304077,
0.15600165724754333,
0.37004396319389343,
0.03458811342716217,
0.5928740501403809,
0.028947066515684128,
-0.7715803384780884,
-0.33730804920196533,
-0.46474653482437134,
-0.29395824670791... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
danielz01/xView2 | danielz01 | 2023-11-19T23:43:11Z | 16 | 0 | null | [
"region:us"
] | 2023-11-19T23:43:11Z | 2023-11-19T23:37:30.000Z | 2023-11-19T23:37:30 | ---
dataset_info:
config_name: competition
features:
- name: image1
dtype: image
- name: image2
dtype: image
- name: mask1
dtype: image
- name: mask2
dtype: image
- name: objects1
struct:
- name: bbox
sequence:
sequence: int32
- name: feature_type
sequence: string
- name: uid
sequence: string
- name: objects2
struct:
- name: bbox
sequence:
sequence: int32
- name: feature_type
sequence: string
- name: subtype
sequence: string
- name: uid
sequence: string
- name: meta1
struct:
- name: features
struct:
- name: lng_lat
list:
- name: properties
struct:
- name: feature_type
dtype: string
- name: uid
dtype: string
- name: wkt
dtype: string
- name: xy
list:
- name: properties
struct:
- name: feature_type
dtype: string
- name: uid
dtype: string
- name: wkt
dtype: string
- name: metadata
struct:
- name: capture_date
dtype: string
- name: catalog_id
dtype: string
- name: disaster
dtype: string
- name: disaster_type
dtype: string
- name: gsd
dtype: float64
- name: height
dtype: int64
- name: id
dtype: string
- name: img_name
dtype: string
- name: off_nadir_angle
dtype: float64
- name: original_height
dtype: int64
- name: original_width
dtype: int64
- name: pan_resolution
dtype: float64
- name: provider_asset_type
dtype: string
- name: sensor
dtype: string
- name: sun_azimuth
dtype: float64
- name: sun_elevation
dtype: float64
- name: target_azimuth
dtype: float64
- name: width
dtype: int64
- name: meta2
struct:
- name: features
struct:
- name: lng_lat
list:
- name: properties
struct:
- name: feature_type
dtype: string
- name: subtype
dtype: string
- name: uid
dtype: string
- name: wkt
dtype: string
- name: xy
list:
- name: properties
struct:
- name: feature_type
dtype: string
- name: subtype
dtype: string
- name: uid
dtype: string
- name: wkt
dtype: string
- name: metadata
struct:
- name: capture_date
dtype: string
- name: catalog_id
dtype: string
- name: disaster
dtype: string
- name: disaster_type
dtype: string
- name: gsd
dtype: float64
- name: height
dtype: int64
- name: id
dtype: string
- name: img_name
dtype: string
- name: off_nadir_angle
dtype: float64
- name: original_height
dtype: int64
- name: original_width
dtype: int64
- name: pan_resolution
dtype: float64
- name: provider_asset_type
dtype: string
- name: sensor
dtype: string
- name: sun_azimuth
dtype: float64
- name: sun_elevation
dtype: float64
- name: target_azimuth
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8588187300.178
num_examples: 2799
- name: test
num_bytes: 2860401182.0
num_examples: 933
download_size: 11309747563
dataset_size: 11448588482.178001
configs:
- config_name: competition
data_files:
- split: train
path: competition/train-*
- split: test
path: competition/test-*
---
# Dataset Card for "xView2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5807663202285767,
-0.038101356476545334,
0.22337637841701508,
0.4979392886161804,
-0.3103850185871124,
-0.3284398317337036,
0.4787200391292572,
-0.16126452386379242,
0.49970927834510803,
0.5543174147605896,
-0.928282618522644,
-0.5760989785194397,
-0.5052407383918762,
-0.219848185777664... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/ultrafeedback-binarized-avg-rating-for-dpo-filtered | argilla | 2023-11-20T17:49:04Z | 16 | 0 | null | [
"region:us"
] | 2023-11-20T17:49:04Z | 2023-11-20T17:48:41.000Z | 2023-11-20T17:48:41 | ---
dataset_info:
features:
- name: source
dtype: string
- name: instruction
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
- name: chosen_avg_rating
dtype: float64
- name: rejected_avg_rating
dtype: float64
- name: chosen_model
dtype: string
splits:
- name: train
num_bytes: 184744511.83915183
num_examples: 57741
download_size: 102559579
dataset_size: 184744511.83915183
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853392958641052,
-0.18616779148578644,
0.6529127955436707,
0.49436280131340027,
-0.19319361448287964,
0.23607419431209564,
0.36072003841400146,
0.050563063472509384,
0.579365611076355,
0.7400140762329102,
-0.6508104205131531,
-0.23783954977989197,
-0.7102249264717102,
-0.0478260256350... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
potsawee/alpaca-finance-43k-en-original-cleaned | potsawee | 2023-11-21T10:48:58Z | 16 | 0 | null | [
"region:us"
] | 2023-11-21T10:48:58Z | 2023-11-21T10:48:55.000Z | 2023-11-21T10:48:55 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 27758795.02654428
num_examples: 43032
download_size: 17437468
dataset_size: 27758795.02654428
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-finance-43k-en-original-cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6277191042900085,
-0.31857386231422424,
-0.08699344843626022,
0.14670798182487488,
-0.6791415810585022,
-0.09222646802663803,
0.12833428382873535,
-0.3569793701171875,
1.0602455139160156,
0.9632295370101929,
-1.0156476497650146,
-0.8199489712715149,
-0.4958990514278412,
-0.1203035935759... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/segments-195k | Globaly | 2023-11-21T22:00:53Z | 16 | 0 | null | [
"region:us"
] | 2023-11-21T22:00:53Z | 2023-11-21T15:37:21.000Z | 2023-11-21T15:37:21 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
xwjzds/pretrain_sts_similarity | xwjzds | 2023-11-24T22:07:30Z | 16 | 0 | null | [
"arxiv:2310.15296",
"region:us"
] | 2023-11-24T22:07:30Z | 2023-11-21T23:28:47.000Z | 2023-11-21T23:28:47 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8335942
num_examples: 41191
download_size: 5350395
dataset_size: 8335942
---
# Dataset Card for Sentence Paraphrase Collections

## Dataset Description

- **Repository:**
- **Paper:** [DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM](https://arxiv.org/abs/2310.15296)
- **Leaderboard:**
- **Point of Contact:** Weijie Xu

### Dataset Summary

Sentence_Paraphase is a combination of sentence paraphrase tasks from various sources, such as paraphrasing with ChatGPT, Paraphrase Adversaries from Word Scrambling (PAWS), and the STS benchmark. We filtered out pairs that were detected as non-English, too short, or lacking a high similarity score.

| Category   | Count  |
|------------|--------|
| Paraphrase | 223241 |

## Dataset Structure

### Data Instances

An example of the data is as follows:

{'input': 'U.S. prosecutors have arrested more than 130 individuals and have seized more than $17 million in a continuing crackdown on Internet fraud and abuse.', 'output': 'More than 130 people have been arrested and $17 million worth of property seized in an Internet fraud sweep announced Friday by three U.S. government agencies.'}

### Data Fields

The data fields are as follows:

- `input` and `output` are paraphrases of a sentence or paragraph.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is available under the Creative Commons NonCommercial (CC BY-NC 4.0) license.
### Citation Information

@misc{xu2023detime,
  title={DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM},
  author={Weijie Xu and Wenxiang Hu and Fanyou Wu and Srinivasan Sengamedu},
  year={2023},
  eprint={2310.15296},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
} | [
-0.12994293868541718,
-0.9521583318710327,
0.3138667345046997,
0.15240433812141418,
-0.44390445947647095,
-0.22088120877742767,
0.0069707585498690605,
-0.02070636861026287,
0.31268197298049927,
0.9509537816047668,
-0.3543524444103241,
-0.6873332858085632,
-0.60820072889328,
0.1942054331302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
argilla/distilabel-docs | argilla | 2023-11-22T13:57:20Z | 16 | 0 | null | [
"region:us"
] | 2023-11-22T13:57:20Z | 2023-11-22T13:57:18.000Z | 2023-11-22T13:57:18 | ---
dataset_info:
features:
- name: input
dtype: string
- name: generation_model
dtype: string
- name: generation_prompt
dtype: string
- name: raw_generation_responses
list:
- name: choices
list:
- name: finish_reason
dtype: string
- name: index
dtype: int64
- name: logprobs
dtype: 'null'
- name: text
dtype: string
- name: created
dtype: int64
- name: id
dtype: string
- name: model
dtype: string
- name: object
dtype: string
- name: usage
struct:
- name: completion_tokens
dtype: int64
- name: prompt_tokens
dtype: int64
- name: total_tokens
dtype: int64
- name: generations
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: areas
list:
- name: Authenticity & Reliability
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Clarity & Transparency
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Compliance with Intent
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
- name: Practical Accuracy
struct:
- name: rating
dtype: string
- name: rationale
dtype: string
splits:
- name: train
num_bytes: 79809
num_examples: 5
download_size: 100998
dataset_size: 79809
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "distilabel-docs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6329087018966675,
-0.27784618735313416,
0.3354816436767578,
0.12263520061969757,
-0.25127974152565,
0.21662360429763794,
0.11268198490142822,
0.06300101429224014,
0.6389575004577637,
0.12344235926866531,
-0.8153177499771118,
-0.916576623916626,
-1.039696455001831,
-0.0743822231888771,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/families-195k | Globaly | 2023-11-22T15:36:42Z | 16 | 0 | null | [
"region:us"
] | 2023-11-22T15:36:42Z | 2023-11-22T15:32:57.000Z | 2023-11-22T15:32:57 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Globaly/bricks-195k | Globaly | 2023-11-22T15:58:52Z | 16 | 0 | null | [
"region:us"
] | 2023-11-22T15:58:52Z | 2023-11-22T15:51:59.000Z | 2023-11-22T15:51:59 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
peterbeamish/environment-env-instruct1 | peterbeamish | 2023-11-23T21:02:43Z | 16 | 0 | null | [
"region:us"
] | 2023-11-23T21:02:43Z | 2023-11-23T00:16:29.000Z | 2023-11-23T00:16:29 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 32209217
num_examples: 914
- name: test
num_bytes: 29810746
num_examples: 915
download_size: 21565229
dataset_size: 62019963
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mmcho1157/apg_sft_dataset | mmcho1157 | 2023-11-29T00:34:04Z | 16 | 0 | null | [
"region:us"
] | 2023-11-29T00:34:04Z | 2023-11-23T06:48:18.000Z | 2023-11-23T06:48:18 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2957960
num_examples: 6804
download_size: 1277172
dataset_size: 2957960
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nthakur/gpl-nfcorpus | nthakur | 2023-11-24T14:04:51Z | 16 | 0 | null | [
"region:us"
] | 2023-11-24T14:04:51Z | 2023-11-23T23:59:16.000Z | 2023-11-23T23:59:16 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mlabonne/bactrian-fr | mlabonne | 2023-11-24T20:15:04Z | 16 | 0 | null | [
"region:us"
] | 2023-11-24T20:15:04Z | 2023-11-24T20:15:02.000Z | 2023-11-24T20:15:02 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: id
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 41488334
num_examples: 50000
download_size: 24344870
dataset_size: 41488334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Xnhyacinth/Image | Xnhyacinth | 2023-11-25T13:44:33Z | 16 | 0 | null | [
"region:us"
] | 2023-11-25T13:44:33Z | 2023-11-25T12:46:20.000Z | 2023-11-25T12:46:20 | ---
dataset_info:
config_name: NQ
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: compressed_ctxs_1
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: compressed_ctxs_5
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: compressed_ctxs_10
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: compressed_ctxs_20
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: compressed_ctxs_50
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
- name: compressed_ctxs_100
struct:
- name: compressed_prompt
dtype: string
- name: compressed_tokens
dtype: int64
- name: origin_tokens
dtype: int64
- name: ratio
dtype: string
- name: saving
dtype: string
splits:
- name: train
num_bytes: 6106425228
num_examples: 79168
- name: eval
num_bytes: 675422872
num_examples: 8757
- name: test
num_bytes: 279441134
num_examples: 3610
download_size: 3931027405
dataset_size: 7061289234
configs:
- config_name: NQ
data_files:
- split: train
path: NQ/train-*
- split: eval
path: NQ/eval-*
- split: test
path: NQ/test-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
petrpan26/typescript-jest | petrpan26 | 2023-11-25T17:14:10Z | 16 | 0 | null | [
"region:us"
] | 2023-11-25T17:14:10Z | 2023-11-25T13:29:46.000Z | 2023-11-25T13:29:46 | ---
dataset_info:
features:
- name: level_0
dtype: int64
- name: index
dtype: int64
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 564108784
num_examples: 11324
download_size: 199094377
dataset_size: 564108784
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
adasgaleus/unannotated-wids | adasgaleus | 2023-11-26T11:36:55Z | 16 | 0 | null | [
"region:us"
] | 2023-11-26T11:36:55Z | 2023-11-26T11:36:54.000Z | 2023-11-26T11:36:54 | ---
dataset_info:
features:
- name: context
dtype: string
splits:
- name: test
num_bytes: 13509
num_examples: 50
download_size: 12242
dataset_size: 13509
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
| [
-0.12853369116783142,
-0.18616779148578644,
0.6529126167297363,
0.49436280131340027,
-0.193193256855011,
0.2360745668411255,
0.36071979999542236,
0.05056314915418625,
0.5793651342391968,
0.740013837814331,
-0.6508103013038635,
-0.23783960938453674,
-0.7102248668670654,
-0.04782580211758613... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
EnKop/dan_test_QA_dataset | EnKop | 2023-11-27T10:20:43Z | 16 | 0 | null | [
"region:us"
] | 2023-11-27T10:20:43Z | 2023-11-27T08:33:51.000Z | 2023-11-27T08:33:51 | [
{
"id": "1",
"context": "Sørg for at din hånd er så afslappet som muligt, mens du stadig rammer alle tonerne korrekt - prøv også at undgå at lave for mange ekstra bevægelser med fingrene. På denne måde udmatter du dig selv så lidt som muligt. Husk at der ingen grund er til at ramme tangenterne hårdt for at få mere lyd ligesom på klaveret. For at få ekstra lydstyrke på harmonika bruger man blæsebælgene med højere tryk eller hastighed.",
"question": "Hvad ville ifølge afsnittet være et unøjagtigt tip, når det drejer sig om at spille korrekt på en harmonika?",
"answer": "For at få mere lyd, skal du trykke hårdere på tangenterne",
"start_position": 108,
"end_position": 129
},
{
"id": "2",
"context": "Danmark er et land med en rig historie og kultur. Landet er hjemsted for mange forskellige museer, der fortæller historien om Danmark og dets folk. Et af de mest populære museer i Danmark er Nationalmuseet, der ligger i København. Nationalmuseet har en samling på over 1 million genstande, der dækker alt fra forhistorisk tid til i dag.",
"question": "Hvad er hovedstaden i Danmark?",
"answer": "København",
"start_position": 31,
"end_position": 38
},
{
"id": "3",
"context": "Den danske madkultur er præget af en blanding af skandinaviske, tyske og franske traditioner. Nogle af de mest populære danske retter er stegt flæsk med persillesovs, frikadeller og smørrebrød.",
"question": "Hvad er navnet på den traditionelle danske ret, der består af stegt flæsk, persillesovs og kartofler?",
"answer": "Stegt flæsk med persillesovs",
"start_position": 68,
"end_position": 89
},
{
"id": "4",
"context": "Danmark er et land med en stærk socialdemokratisk tradition. Landet har et veludviklet socialt sikkerhedsnet, der sikrer borgerne en række rettigheder og ydelser.",
"question": "Hvad er navnet på det danske parti, der er det største i Folketinget?",
"answer": "Socialdemokratiet",
"start_position": 80,
"end_position": 98
},
{
"id": "5",
"context": "Danmark er et land med en befolkning på omkring 5,8 millioner mennesker. Landet er et af de mest veludviklede lande i verden og har en høj levestandard.",
"question": "Hvad er den officielle religion i Danmark?",
"answer": "Folkekirken",
"start_position": 108,
"end_position": 119
},
{
"id": "6",
"context": "Danmark er et land med en lang kystlinje. Landet har mange smukke strande, der er populære blandt turister.",
"question": "Hvad er navnet på den danske ø, der er hjemsted for Roskilde Festival?",
"answer": "Sjælland",
"start_position": 65,
"end_position": 77
}
] | [
-0.7442089319229126,
-0.696036159992218,
0.4676395654678345,
0.23052486777305603,
-0.5274671912193298,
-0.1981334686279297,
0.05449063330888748,
-0.2137575000524521,
0.6860392689704895,
0.43517088890075684,
-0.6046465039253235,
-0.615389883518219,
-0.6123551726341248,
0.514087975025177,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
justinqbui/covid_fact_checked_polifact | justinqbui | 2021-12-13T00:33:36Z | 15 | 2 | null | [
"region:us"
] | 2021-12-13T00:33:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | This dataset was gathered by using an automated web scraper that scraped [polifact covid fact checker](https://www.politifact.com/coronavirus/). This dataset contains three columns, the text, the rating given by polifact (half-true, full-flop, pants-fire, barely-true true, mostly-true, and false), and the adjusted rating.
The adjusted rating was created by mapping the raw rating given by polifact
```
true -> true
mostly-true -> true
half-true -> misleading
barely-true -> misleading
false -> false
pants-fire -> false
full-flop -> false
```
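Applied in code, the mapping above amounts to a small lookup table. The sketch below is illustrative only; `RATING_MAP` and `adjust_rating` are hypothetical names, not part of the dataset:

```python
# Illustrative sketch (not part of the dataset): collapse PolitiFact's
# raw ratings into the three adjusted labels using the mapping above.
RATING_MAP = {
    "true": "true",
    "mostly-true": "true",
    "half-true": "misleading",
    "barely-true": "misleading",
    "false": "false",
    "pants-fire": "false",
    "full-flop": "false",
}

def adjust_rating(raw_rating: str) -> str:
    """Return the adjusted label for a raw PolitiFact rating."""
    return RATING_MAP[raw_rating.strip().lower()]

print(adjust_rating("barely-true"))  # misleading
```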
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- en-US
licenses:
- unknown
multilinguality:
- monolingual
pretty_name: polifact-covid-fact-checker
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
- question-answering
task_ids:
- fact-checking
- multi-label-classification
- sentiment-classification
- closed-domain-qa
- extractive-qa | [
-0.3349331021308899,
-0.43947818875312805,
0.15684378147125244,
0.36810266971588135,
-0.3915517032146454,
0.19698409736156464,
-0.009325842373073101,
-0.22497864067554474,
0.3754454255104065,
0.3541643023490906,
-0.2741049826145172,
-0.8758041262626648,
-0.5879219770431519,
0.3509188890457... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/auto-cats-and-dogs | nateraw | 2021-07-13T07:32:53Z | 15 | 0 | null | [
"task_categories:other",
"auto-generated",
"image-classification",
"region:us"
] | 2021-07-13T07:32:53Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 |
---
task_categories:
- other
task_ids:
- other-image-classification
- image-classification
tags:
- auto-generated
- image-classification
---
# nateraw/auto-cats-and-dogs
Image Classification Dataset
## Usage
```python
from PIL import Image
from datasets import load_dataset
def pil_loader(path: str):
with open(path, 'rb') as f:
im = Image.open(f)
return im.convert('RGB')
def image_loader(example_batch):
example_batch['image'] = [
pil_loader(f) for f in example_batch['file']
]
return example_batch
ds = load_dataset('nateraw/auto-cats-and-dogs')
ds = ds.with_transform(image_loader)
```
| [
-0.5324965715408325,
-0.3885525166988373,
-0.12550991773605347,
0.1931726485490799,
-0.3810724914073944,
-0.07233331352472305,
0.03083069436252117,
-0.1507747769355774,
0.10166604071855545,
0.5464150309562683,
-0.3073974847793579,
-0.4350658059120178,
-0.6365491151809692,
0.327772110700607... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/beans | nateraw | 2022-10-20T18:41:18Z | 15 | 0 | null | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-10-20T18:41:18Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Beans
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
task_ids:
- other-other-image-classification
---
# Dataset Card for Beans
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Beans Homepage](https://github.com/AI-Lab-Makerere/ibean/)
- **Repository:**[AI-Lab-Makerere/ibean](https://github.com/AI-Lab-Makerere/ibean/)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
Beans leaf dataset with images of diseased and healthy leaves.
### Supported Tasks and Leaderboards
- image-classification
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/0aaa78294d4bf5114f58547e48d91b7826649919505379a167decb629aa92b0a/train/bean_rust/bean_rust_train.109.jpg',
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image_file_path`: a `string` filepath to an image.
- `labels`: an `int` classification label.
### Data Splits
| name |train|validation|test|
|----------|----:|----:|----:|
|beans|1034|133|128|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {beansdata,
author="Makerere AI Lab",
title="Bean disease dataset",
month="January",
year="2020",
url="https://github.com/AI-Lab-Makerere/ibean/"
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
| [
-0.5556142330169678,
-0.6135097146034241,
0.21987254917621613,
0.346105694770813,
-0.14182429015636444,
0.08109205961227417,
-0.22087883949279785,
-0.5524974465370178,
0.4309658706188202,
0.39866864681243896,
-0.5402764081954956,
-0.9762070178985596,
-0.8758946061134338,
0.0839961990714073... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nateraw/cats_vs_dogs | nateraw | 2022-10-20T18:41:56Z | 15 | 0 | null | [
"task_categories:other",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-10-20T18:41:56Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Cats and Dogs
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids:
- other-other-image-classification
---
# Dataset Card for Cats Vs. Dogs
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[Cats vs Dogs Dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765)
- **Repository:** N/A
- **Paper:**[Paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/CCS2007.pdf)
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
A large set of images of cats and dogs. There are 1738 corrupted images that are dropped.
### Supported Tasks and Leaderboards
- image-classification
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': '/root/.cache/huggingface/datasets/downloads/extracted/6e1e8c9052e9f3f7ecbcb4b90860668f81c1d36d86cc9606d49066f8da8bfb4f/PetImages/Cat/1.jpg',
'label': 0
}
```
### Data Fields
The data instances have the following fields:
- `image`: a `string` filepath to an image.
- `label`: an `int` classification label.
### Data Splits
| name |train|
|----------|----:|
|cats_and_dogs|23410|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization,
author = {Elson, Jeremy and Douceur, John (JD) and Howell, Jon and Saul, Jared},
title = {Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization},
booktitle = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
year = {2007},
month = {October},
publisher = {Association for Computing Machinery, Inc.},
url = {https://www.microsoft.com/en-us/research/publication/asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization/},
edition = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)},
}
```
### Contributions
Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
| [
-0.5576833486557007,
-0.44083625078201294,
-0.09389708936214447,
0.20049791038036346,
-0.3610168695449829,
0.21478556096553802,
-0.1403067260980606,
-0.6012926697731018,
0.5312137603759766,
0.5367978811264038,
-0.6586659550666809,
-0.7911175489425659,
-0.5772013664245605,
0.300508469343185... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pierreguillou/lener_br_finetuning_language_model | pierreguillou | 2022-10-25T09:54:32Z | 15 | 2 | lener-br | [
"task_ids:language-modeling",
"multilinguality:monolingual",
"language:pt",
"lener_br",
"region:us"
] | 2022-10-25T09:54:32Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- pt
multilinguality:
- monolingual
task_ids:
- language-modeling
paperswithcode_id: lener-br
pretty_name: LeNER-Br language modeling
datasets:
- lener_br
tags:
- lener_br
---
# Dataset Card for "LeNER-Br language modeling"
## Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the [LeNER-Br](https://huggingface.co/datasets/lener_br) dataset ([official site](https://cic.unb.br/~teodecampos/LeNER-Br/)).
The legal texts were downloaded from this [link](https://cic.unb.br/~teodecampos/LeNER-Br/LeNER-Br.zip) (93.6 MB) and processed to create a `DatasetDict` with train and validation splits (20% held out for validation).
The LeNER-Br language modeling dataset allows the finetuning of language models such as BERTimbau [base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) and [large](https://huggingface.co/neuralmind/bert-large-portuguese-cased).
## Language
Portuguese from Brazil.
## Blog post
[NLP | Modelos e Web App para Reconhecimento de Entidade Nomeada (NER) no domรญnio jurรญdico brasileiro](https://medium.com/@pierre_guillou/nlp-modelos-e-web-app-para-reconhecimento-de-entidade-nomeada-ner-no-dom%C3%ADnio-jur%C3%ADdico-b658db55edfb) (29/12/2021)
## Dataset structure
```
DatasetDict({
validation: Dataset({
features: ['text'],
num_rows: 3813
})
train: Dataset({
features: ['text'],
num_rows: 15252
})
})
```
## Use
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset("pierreguillou/lener_br_finetuning_language_model")
``` | [
-0.3559870421886444,
-0.7160823941230774,
-0.03392189368605614,
0.3496050238609314,
-0.38818100094795227,
-0.30471333861351013,
-0.4077463746070862,
-0.24024972319602966,
0.19324646890163422,
0.7149352431297302,
-0.4582994878292084,
-0.8631309866905212,
-0.3857305347919464,
-0.005076444242... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pritamdeka/cord-19-fulltext | pritamdeka | 2022-02-05T02:29:13Z | 15 | 1 | null | [
"region:us"
] | 2022-02-05T02:29:13Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | # Dataset Card for [pritamdeka/cord-19-fulltext]
## Dataset Description
### Dataset Summary
This is a modified [cord19](https://huggingface.co/datasets/cord19) dataset that contains only the `fulltext` field. It can be used directly for language modelling tasks.
### Languages
English
### Citation Information
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and
K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and
Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and
D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
```
sebastiaan/test-cefr | sebastiaan | 2021-11-30T17:15:26Z | 15 | 3 | null | [
"region:us"
] | 2021-11-30T17:15:26Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
tesemnikov-av/toxic_dataset_classification | tesemnikov-av | 2022-02-06T09:18:17Z | 15 | 0 | null | [
"region:us"
] | 2022-02-06T09:18:17Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
teven/stackexchange | teven | 2021-12-03T18:36:21Z | 15 | 0 | null | [
"region:us"
] | 2021-12-03T18:36:21Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | Entry not found
valurank/news-12factor | valurank | 2022-10-21T13:35:36Z | 15 | 0 | null | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-10-21T13:35:36Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 |
---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for news-12factor
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
- [Annotations](#annotations)
## Dataset Description
80+ news articles, each with URL, title and body text, scored on 12 quality factors and assigned a single rank.
## Languages
The text in the dataset is in English
## Dataset Structure
[Needs More Information]
## Source Data
URL data was scraped using [news-please](https://github.com/fhamborg/news-please)
## Annotations
Articles were manually annotated by Alex on a 12-factor score card.
valurank/offensive-multi | valurank | 2022-10-25T09:57:14Z | 15 | 0 | null | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:derived",
"language:en",
"license:other",
"region:us"
] | 2022-10-25T09:57:14Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
language:
- en
license: other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- derived
task_categories:
- text-classification
---
# Dataset Card for offensive-multi
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
This dataset contains a collection of text labeled as offensive (class 1) or not (class 0).
## Dataset Creation
The dataset was created by aggregating multiple publicly available datasets.
### Source Data
The following datasets were used:
* https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lower-casing and removing mentions and URLs. Dropped instances labeled as 'hate speech'.
* https://sites.google.com/site/offensevalsharedtask/olid - Tweet text cleaned by lower-casing and removing mentions and URLs. Used the 'subtask_a' column for labeling.
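A cleaning step like the one described (lower-casing, stripping mentions and URLs) might look like the sketch below. The regexes are an illustrative assumption, not the exact code used to build this dataset:

```python
import re

def clean_tweet(text):
    # Lower-case, then drop @mentions and URLs, as described above.
    text = text.lower()
    text = re.sub(r"@\w+", "", text)                    # remove mentions
    text = re.sub(r"https?://\S+|www\.\S+", "", text)   # remove URLs
    return re.sub(r"\s+", " ", text).strip()            # collapse leftover whitespace
```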
yonesuke/Vicsek | yonesuke | 2022-02-17T05:34:34Z | 15 | 0 | null | [
"license:mit",
"region:us"
] | 2022-02-17T05:34:34Z | 2022-03-02T23:29:22.000Z | 2022-03-02T23:29:22 | ---
license: mit
---
Biomedical-TeMU/SPACCC_Sentence-Splitter | Biomedical-TeMU | 2022-03-11T02:09:00Z | 15 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-11T02:09:00Z | 2022-03-11T01:59:57.000Z | 2022-03-11T01:59:57 | ---
license: cc-by-4.0
---
# The Sentence Splitter (SS) for Clinical Cases Written in Spanish
## Introduction
This repository contains the sentence splitting model trained using the SPACCC_SPLIT corpus (https://github.com/PlanTL-SANIDAD/SPACCC_SPLIT). The model was trained on 90% of the corpus (900 clinical cases) and tested against the remaining 10% (100 clinical cases). It is a great resource for splitting sentences in biomedical documents, especially clinical cases written in Spanish, and obtains an F-Measure of 98.75%.
This model was created using the Apache OpenNLP machine learning toolkit (https://opennlp.apache.org/), with the release number 1.8.4, released in December 2017.
This repository contains the model, training set, testing set, Gold Standard, executable file, and the source code.
## Prerequisites
This software has been compiled with Java SE 1.8 and it should work with recent versions. You can download Java from the following website: https://www.java.com/en/download
The executable file already includes the Apache OpenNLP dependencies inside, so the download of this toolkit is not necessary. However, you may download the latest version from this website: https://opennlp.apache.org/download.html
The library file we have used to compile is "opennlp-tools-1.8.4.jar". The source code should be able to compile with the latest version of OpenNLP, "opennlp-tools-*RELEASE_NUMBER*.jar". In case there are compilation or execution errors, please let us know and we will make all the necessary updates.
## Directory structure
<pre>
exec/
An executable file that can be used to apply the sentence splitter to your documents.
You can find the notes about its execution below in section "Usage".
gold_standard/
The clinical cases used as gold standard to evaluate the model's performance.
model/
The sentence splitting model, "es-sentence-splitter-model-spaccc.bin", a binary file.
src/
The source code to create the model (CreateModelSS.java) and evaluate it (EvaluateModelSS.java).
The directory includes an example about how to use the model inside your code (SentenceSplitter.java).
File "abbreviations.dat" contains a list of abbreviations, essential to build the model.
test_set/
The clinical cases used as test set to evaluate the model's performance.
train_set/
The clinical cases used to build the model. We use a single file with all documents present in
directory "train_set_docs" concatented.
train_set_docs/
The clinical cases used to build the model. For each record, the sentences are already split.
</pre>
## Usage
The executable file *SentenceSplitter.jar* is the program that splits the sentences of a document. It takes two arguments: (1) the text file whose sentences should be split, and (2) the model file (*es-sentence-splitter-model-spaccc.bin*). The program prints all the split sentences to the terminal, one sentence per line.
From the `exec` folder, type the following command in your terminal:
<pre>
$ java -jar SentenceSplitter.jar INPUT_FILE MODEL_FILE
</pre>
## Examples
Assuming you have the executable file, the input file and the model file in the same directory:
<pre>
$ java -jar SentenceSplitter.jar file_with_sentences_not_splitted.txt es-sentence-splitter-model-spaccc.bin
</pre>
## Model creation
To create this sentence splitting model, we used the following training parameters (class *TrainingParameters* in OpenNLP) to get the best performance:
- Number of iterations: 4000.
- Cutoff parameter: 3.
- Trainer type parameter: *EventTrainer.EVENT_VALUE*.
- Algorithm: Maximum Entropy (*ModelType.MAXENT.name()*).
Meanwhile, we used the following parameters for the sentence split builder (class *SentenceDetectorFactory* in OpenNLP) to get the best performance:
- Subclass name: null value.
- Language code: *es* (for Spanish).
- Use token end: true.
- Abbreviation dictionary: file "abbreviations.dat" (included in the `src/` directory).
- End of file characters: ".", "?" and "!".
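For intuition only, the interaction between the abbreviation dictionary and the end-of-sentence characters can be sketched in Python. The real model is a trained OpenNLP maximum-entropy detector, and the abbreviation set below is a made-up stand-in for `abbreviations.dat`:

```python
ABBREVIATIONS = {"dr.", "sr.", "sra.", "etc."}  # hypothetical stand-in for abbreviations.dat

def naive_sentence_split(text):
    # Split after tokens ending in '.', '?' or '!' unless the token is a
    # known abbreviation; a toy approximation of the trained detector.
    sentences, current = [], []
    for token in text.split():
        current.append(token)
        if token[-1] in ".?!" and token.lower() not in ABBREVIATIONS:
            sentences.append(" ".join(current))
            current = []
    if current:
        sentences.append(" ".join(current))
    return sentences
```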
## Model evaluation
After tuning the model with different values for each of the parameters mentioned above, we obtained the best performance with the values listed above.
| | Value |
| ----------------------------------------: | :------ |
| Number of sentences in the gold standard | 1445 |
| Number of sentences generated | 1447 |
| Number of sentences correctly splitted | 1428 |
| Number of sentences wrongly splitted | 12 |
| Number of sentences missed | 5 |
| **Precision** | **98.69%** |
| **Recall** | **98.82%** |
| **F-Measure** | **98.75%**|
Table 1: Evaluation statistics for the sentence splitting model.
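The figures in Table 1 are consistent with the usual definitions of precision, recall and F-measure; a quick check, using the counts from the table:

```python
# Counts from Table 1.
correct, generated, gold = 1428, 1447, 1445

precision = correct / generated   # correctly split / sentences generated
recall = correct / gold           # correctly split / gold-standard sentences
f_measure = 2 * precision * recall / (precision + recall)
```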
## Contact
Ander Intxaurrondo (ander.intxaurrondo@bsc.es)
## License
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
Copyright (c) 2018 Secretaría de Estado para el Avance Digital (SEAD)
chiarab/dct-keyword-uk | chiarab | 2022-03-13T19:24:53Z | 15 | 0 | null | [
"region:us"
] | 2022-03-13T19:24:53Z | 2022-03-13T09:21:55.000Z | 2022-03-13T09:21:55 | Entry not found
stjokerli/TextToText_multirc_seqio | stjokerli | 2022-03-19T12:45:30Z | 15 | 0 | null | [
"region:us"
] | 2022-03-19T12:45:30Z | 2022-03-13T09:31:12.000Z | 2022-03-13T09:31:12 | Entry not found
wanyu/IteraTeR_human_doc | wanyu | 2022-10-24T18:58:15Z | 15 | 1 | null | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"text-editing",
"arxiv:2203.03802",
"region:us"
] | 2022-10-24T18:58:15Z | 2022-03-13T20:48:31.000Z | 2022-03-13T20:48:31 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: IteraTeR-human-doc
language_bcp47:
- en-US
tags:
- conditional-text-generation
- text-editing
---
Paper: [Understanding Iterative Revision from Human-Written Text](https://arxiv.org/abs/2203.03802)
Authors: Wanyu Du, Vipul Raheja, Dhruv Kumar, Zae Myung Kim, Melissa Lopez, Dongyeop Kang
Github repo: https://github.com/vipulraheja/IteraTeR
tomekkorbak/pile-curse-full_test | tomekkorbak | 2022-03-17T21:57:14Z | 15 | 0 | null | [
"region:us"
] | 2022-03-17T21:57:14Z | 2022-03-17T17:09:04.000Z | 2022-03-17T17:09:04 | Entry not found
tomekkorbak/pile-curse-chunk-0 | tomekkorbak | 2022-03-18T21:40:36Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:40:36Z | 2022-03-18T21:39:05.000Z | 2022-03-18T21:39:05 | Entry not found
tomekkorbak/pile-curse-chunk-3 | tomekkorbak | 2022-03-18T21:40:21Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:40:21Z | 2022-03-18T21:39:05.000Z | 2022-03-18T21:39:05 | Entry not found
tomekkorbak/pile-curse-chunk-5 | tomekkorbak | 2022-03-18T21:40:26Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:40:26Z | 2022-03-18T21:39:49.000Z | 2022-03-18T21:39:49 | Entry not found
tomekkorbak/pile-curse-chunk-6 | tomekkorbak | 2022-03-18T21:40:28Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:40:28Z | 2022-03-18T21:40:01.000Z | 2022-03-18T21:40:01 | Entry not found
tomekkorbak/pile-curse-chunk-4 | tomekkorbak | 2022-03-18T21:40:31Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:40:31Z | 2022-03-18T21:40:03.000Z | 2022-03-18T21:40:03 | Entry not found
tomekkorbak/pile-curse-chunk-16 | tomekkorbak | 2022-03-18T21:40:54Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:40:54Z | 2022-03-18T21:40:37.000Z | 2022-03-18T21:40:37 | Entry not found
tomekkorbak/pile-curse-chunk-15 | tomekkorbak | 2022-03-18T21:41:13Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T21:41:13Z | 2022-03-18T21:41:01.000Z | 2022-03-18T21:41:01 | Entry not found
tomekkorbak/pile-curse-chunk-14 | tomekkorbak | 2022-03-18T22:06:38Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:38Z | 2022-03-18T22:03:49.000Z | 2022-03-18T22:03:49 | Entry not found
tomekkorbak/pile-curse-chunk-13 | tomekkorbak | 2022-03-18T22:06:34Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:34Z | 2022-03-18T22:03:49.000Z | 2022-03-18T22:03:49 | Entry not found
tomekkorbak/pile-curse-chunk-8 | tomekkorbak | 2022-03-18T22:06:33Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:33Z | 2022-03-18T22:04:35.000Z | 2022-03-18T22:04:35 | Entry not found
tomekkorbak/pile-curse-chunk-9 | tomekkorbak | 2022-03-18T22:07:14Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:07:14Z | 2022-03-18T22:04:53.000Z | 2022-03-18T22:04:53 | Entry not found
tomekkorbak/pile-curse-chunk-20 | tomekkorbak | 2022-03-18T22:06:10Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:10Z | 2022-03-18T22:05:00.000Z | 2022-03-18T22:05:00 | Entry not found
tomekkorbak/pile-curse-chunk-7 | tomekkorbak | 2022-03-18T22:06:13Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:13Z | 2022-03-18T22:05:10.000Z | 2022-03-18T22:05:10 | Entry not found
tomekkorbak/pile-curse-chunk-17 | tomekkorbak | 2022-03-18T22:06:07Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:07Z | 2022-03-18T22:05:12.000Z | 2022-03-18T22:05:12 | Entry not found
tomekkorbak/pile-curse-chunk-21 | tomekkorbak | 2022-03-18T22:06:05Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:05Z | 2022-03-18T22:05:13.000Z | 2022-03-18T22:05:13 | Entry not found
tomekkorbak/pile-curse-chunk-22 | tomekkorbak | 2022-03-18T22:05:58Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:05:58Z | 2022-03-18T22:05:14.000Z | 2022-03-18T22:05:14 | Entry not found
tomekkorbak/pile-curse-chunk-10 | tomekkorbak | 2022-03-18T22:06:03Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:03Z | 2022-03-18T22:05:21.000Z | 2022-03-18T22:05:21 | Entry not found
tomekkorbak/pile-curse-chunk-26 | tomekkorbak | 2022-03-18T22:05:50Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:05:50Z | 2022-03-18T22:05:25.000Z | 2022-03-18T22:05:25 | Entry not found
tomekkorbak/pile-curse-chunk-11 | tomekkorbak | 2022-03-18T22:06:16Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:16Z | 2022-03-18T22:05:27.000Z | 2022-03-18T22:05:27 | Entry not found
tomekkorbak/pile-curse-chunk-27 | tomekkorbak | 2022-03-18T22:06:23Z | 15 | 0 | null | [
"region:us"
] | 2022-03-18T22:06:23Z | 2022-03-18T22:05:33.000Z | 2022-03-18T22:05:33 | Entry not found
kingabzpro/savtadepth-flags-V2 | kingabzpro | 2023-03-20T09:16:00Z | 15 | 2 | null | [
"region:us"
] | 2023-03-20T09:16:00Z | 2022-03-19T07:08:03.000Z | 2022-03-19T07:08:03 | Entry not found
IsaacRodgz/Fake-news-latam-omdena | IsaacRodgz | 2022-03-23T00:20:36Z | 15 | 1 | null | [
"region:us"
] | 2022-03-23T00:20:36Z | 2022-03-22T23:58:35.000Z | 2022-03-22T23:58:35 | # Dataset Card for Fake-news-latam-omdena
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector)
- **Repository:**[latam-chapters-news-detector](https://github.com/OmdenaAI/latam-chapters-news-detector)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Since the Cambridge Analytica scandal, a Pandora's box has been opened around the world, bringing to light campaigns, some involving current Latin American leaders, that manipulate public opinion through social media to win elections. There is a common and simple pattern, involving platforms such as Facebook and fake news, through which candidates are able to build a nefarious narrative for their own benefit. This is a growing concern for our democracies, as many of these practices have spread widely across the region and more people are gaining access to the internet. It is therefore necessary to be able to advise the population, and for that we have to be able to quickly spot these plots on the net before the damage is irreversible.
Therefore, an initial effort was made to collect this dataset, which gathers news from different sources in Mexico, Colombia and El Salvador, with the objective of training a classification model and deploying it as part of the Politics Fake News Detector in LATAM (Latin America) project [https://github.com/OmdenaAI/latam-chapters-news-detector].
Website articles and tweets were considered.
### Supported Tasks and Leaderboards
Binary fake news classification [with classes "True" and "Fake"]
### Languages
Spanish only
## Dataset Structure
### Data Instances
* Train: 2782
* Test: 310
### Data Fields
[More Information Needed]
### Data Splits
Train and test. Each split was generated with a stratified procedure in order to have the same proportion of fake news in both train and test.
Around 1/3 of the observations in each split have the label 'Fake', while 2/3 have the label 'True'.
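A stratified procedure with those proportions can be sketched as follows. The function and the 10% test fraction (310 of 3092 examples is roughly 10%) are illustrative assumptions, not the exact code used to create the splits:

```python
import random
from collections import defaultdict

def stratified_split(examples, test_fraction=0.1, seed=0):
    # Group by label, then take the same fraction of each group for the
    # test set so that label proportions match across splits.
    by_label = defaultdict(list)
    for text, label in examples:
        by_label[label].append((text, label))
    rng = random.Random(seed)
    train, test = [], []
    for items in by_label.values():
        rng.shuffle(items)
        n_test = int(len(items) * test_fraction)
        test.extend(items[:n_test])
        train.extend(items[n_test:])
    return train, test
```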
## Dataset Creation
### Curation Rationale
For a more specific flow of how the labeling was done, follow this link: https://github.com/OmdenaAI/latam-chapters-news-detector/blob/main/Fake-news_Flowchart.pdf
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Once the capacity to detect irregularities in news activity on the internet is developed, we may be able to counter disinformation with the help of additional research. As less time is spent looking for those occurrences, more time can be used to validate the results and uncover the truth, enabling researchers, journalists and organizations to help people make an informed decision about whether public opinion is true or not, so that they can identify on their own if someone is trying to manipulate them for a certain political benefit.
If this matter isn't tackled with enough urgency, we might see the rise of a new dark era in Latin American politics, where many unscrupulous parties and people will manage to gain power and control the lives of many people.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to the Omdena local chapter members from Mexico, Colombia and El Salvador for their amazing effort to collect and curate this dataset.