author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
VanessaSchenkel | null | null | null | false | 2 | false | VanessaSchenkel/pt-all-words | 2022-10-15T01:59:29.000Z | sbwce | false | 8068419f931b965fce6f7ee08a2ad07d7397d039 | [] | [
"annotations_creators:no-annotation",
"language:pt",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
... | https://huggingface.co/datasets/VanessaSchenkel/pt-all-words/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- pt
language_creators:
- expert-generated
license: []
multilinguality:
- monolingual
paperswithcode_id: sbwce
pretty_name: "Dicion\xE1rio em Portugu\xEAs"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for Dicionário Português
It is a list of Portuguese words with their inflections.
How to use it:
```python
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/pt-all-words")
remote_dataset
```
|
omerist | null | null | null | false | 1 | false | omerist/arab-ds-mini | 2022-10-15T01:12:49.000Z | null | false | d5c7c07268056a1b294d5815bdf012f92c327c1d | [] | [] | https://huggingface.co/datasets/omerist/arab-ds-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 87011869.13722204
num_examples: 27116
- name: validation
num_bytes: 9668342.001417983
num_examples: 3013
download_size: 49392988
dataset_size: 96680211.13864002
---
# Dataset Card for "arab-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vialibre | null | @misc{https://doi.org/10.48550/arxiv.2207.06591,
doi = {10.48550/ARXIV.2207.06591},
url = {https://arxiv.org/abs/2207.06591},
author = {Alemany, Laura Alonso and Benotti, Luciana and González, Lucía and Maina, Hernán and Busaniche, Beatriz and Halvorsen, Alexia and Bordone, Matías and Sánchez, Jorge},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI),
FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A tool to overcome technical barriers for bias assessment in human language technologies},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | null | false | 91 | false | vialibre/spanish3bwc_extended_vocab | 2022-10-19T16:28:13.000Z | null | false | b68c752e044cf833209e0d758b4f96535a071d2e | [] | [
"arxiv:2207.06591",
"language:es",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"license:mit"
] | https://huggingface.co/datasets/vialibre/spanish3bwc_extended_vocab/resolve/main/README.md | ---
language:
- 'es'
multilinguality:
- monolingual
size_categories:
- "1M<n<10M"
pretty_name: "Vocabulary info - Spanish 3 Billion Words Corpora"
license: mit
---
# Dataset Card for "Vocabulary info - Spanish 3 Billion Words Corpora"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Source Data](#source-data)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** https://arxiv.org/abs/2207.06591
### Dataset Summary
* Number of words: 1.220.051 (1.2M)
* Minimum absolute frequency: 5.
### Languages
* Spanish
### Source Data
- **Repository:** https://huggingface.co/datasets/nanom/splittedspanish3bwc
## Dataset Structure
### Data Fields
| Field | Description |
|-----------------------|----:|
|word | Word|
|freq | Absolute word frequency|
|percentile | percentile = $\mathrm{cum\_freq}_i / \max(\mathrm{cum\_freq})$|
|splits | Block name of the complete data set containing the word|
|in_subset | Dictionary of parent set name (from which each block is extracted) and number of occurrences of the word|
|embedding | Normalized word embedding. Algorithm: FastText; parameters: (min subword-ngram=3, max subword-ngram=6, minCount=5, epochs=20, dim=300) |
|pca | Principal Component Analysis (dim=2) |
Example:
```
word administrar
freq 28723
percentile 0.05882
splits "[allwikis_288, OpenSubtitles2018_501, multiUN_...]"
in_subset "{'allwikis': 5028, 'OpenSubtitles2018': 2447, ...}"
embedding "[-0.0661974624, -0.0224714149, -0.0005360196, ...]"
pca "[-0.0098309302, 0.1858775107]"
```
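The `percentile` column above can be reproduced from absolute frequencies with a short sketch (the frequencies below are invented toy values, not actual corpus counts):

```python
def percentiles(freqs):
    """Cumulative frequency share per word, sorted by descending frequency."""
    items = sorted(freqs.items(), key=lambda kv: -kv[1])
    total = sum(freqs.values())
    out, cum = {}, 0
    for word, f in items:
        cum += f
        out[word] = cum / total  # i-th cumulative freq / max cumulative freq
    return out

# Toy vocabulary: the most frequent word gets the smallest percentile
toy = {"de": 50, "administrar": 30, "zarzuela": 20}
print(percentiles(toy))
```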
### Data Splits
| name |train|
|-----------------------|----:|
|full |1220051|
|half |618734|
|quarter |318958|
|mini |56|
## Additional Information
### Licensing Information
* [MIT Licence](https://huggingface.co/datasets/vialibre/spanish3bwc_extended_vocab/resolve/main/LICENSE)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2207.06591,
doi = {10.48550/ARXIV.2207.06591},
url = {https://arxiv.org/abs/2207.06591},
author = {Alemany, Laura Alonso and Benotti, Luciana and González, Lucía and Maina, Hernán and Busaniche, Beatriz and Halvorsen, Alexia and Bordone, Matías and Sánchez, Jorge},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI),
FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A tool to overcome technical barriers for bias assessment in human language technologies},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
randomwalksky | null | null | null | false | null | false | randomwalksky/toy | 2022-10-15T03:30:33.000Z | null | false | 042361486f09031154629eff1e6059a609456f5a | [] | [
"license:openrail"
] | https://huggingface.co/datasets/randomwalksky/toy/resolve/main/README.md | ---
license: openrail
---
|
zhenzi | null | @software{2022,
title = {Dataset Title},
author = {zhenzi},
year={2022},
month={March},
publisher = {GitHub}
} | Dataset description. | false | 192 | false | zhenzi/imagenette | 2022-10-19T03:37:03.000Z | null | false | 5dd31b4c66365c698c3e2e92d86b0d11ec6598cc | [] | [] | https://huggingface.co/datasets/zhenzi/imagenette/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
config_name: tests
splits:
- name: train
num_bytes: 459616258
num_examples: 10500
download_size: 467583804
dataset_size: 459616258
---
|
mrajbrahma | null | null | null | false | null | false | mrajbrahma/bodo-words | 2022-10-15T04:56:25.000Z | null | false | da9a982d6ee573ec8c72df9e6e78a0d92fa56eb2 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/mrajbrahma/bodo-words/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
|
CrispyShark | null | null | null | false | null | false | CrispyShark/emoji_hairpin | 2022-10-15T14:27:59.000Z | null | false | 1974c2c4a875f5da8848ce9adf4821f825352382 | [] | [] | https://huggingface.co/datasets/CrispyShark/emoji_hairpin/resolve/main/README.md | |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad_v2-squad_v2-ea058a-1765461442 | 2022-10-15T06:31:40.000Z | null | false | 2eefce06256e84521bdff3e3a0df0248bd28cb27 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad_v2-squad_v2-ea058a-1765461442/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jets](https://huggingface.co/Jets) for evaluating this model. |
bartelds | null | null | null | false | 76 | false | bartelds/gos | 2022-10-15T07:14:03.000Z | null | false | f6201982f95694dd4ce064d4c9865c7bfebf80ce | [] | [] | https://huggingface.co/datasets/bartelds/gos/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: variant
dtype: string
- name: text
dtype: string
- name: seconds
dtype: float64
splits:
- name: train
num_bytes: 1357575493.104
num_examples: 3706
- name: validation
num_bytes: 162994097.0
num_examples: 412
download_size: 1423740383
dataset_size: 1520569590.104
---
# Dataset Card for "gos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Shushant | null | null | null | false | 3 | false | Shushant/PubmedQuestionAnsweringDataset | 2022-10-15T07:22:44.000Z | null | false | 54b7e788d34f58904c6a02941ca9270f5179db65 | [] | [
"license:other"
] | https://huggingface.co/datasets/Shushant/PubmedQuestionAnsweringDataset/resolve/main/README.md | ---
license: other
---
|
une | null | null | null | false | null | false | une/uneune_image1 | 2022-10-15T09:07:58.000Z | null | false | 2ccad53104e75b5ec10f8abc1ac16f4c5f7ea384 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/une/uneune_image1/resolve/main/README.md | ---
license: cc-by-4.0
---
# Dataset Card for uneune_image1
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
今まで私が描いたイラスト100枚のデータセットです。
512×512にトリミングしてあります。
さっくりとstableDiffusionでの学習用に使えるデータセットが欲しかったので作りました。
This is a data set of 100 illustrations I have drawn so far.
Cropped to 512x512.
I wanted a dataset that can be used for learning with stableDiffusion, so I made it. |
Harsit | null | null | null | false | 2 | false | Harsit/xnli2.0_train_urdu | 2022-10-15T09:30:11.000Z | null | false | 3c01cebd3e2d75dbf0987f1bc4c2b424923d733d | [] | [] | https://huggingface.co/datasets/Harsit/xnli2.0_train_urdu/resolve/main/README.md | language: ["Urdu"] |
CG80499 | null | null | null | false | 119 | false | CG80499/Inverse-scaling-test | 2022-10-16T11:33:06.000Z | null | false | d11e6d5bb369ca02a87fd48611f640afa98c7962 | [] | [
"license:bigscience-openrail-m",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:zero-shot-classification"
] | https://huggingface.co/datasets/CG80499/Inverse-scaling-test/resolve/main/README.md | ---
license: bigscience-openrail-m
task_categories:
- multiple-choice
- question-answering
- zero-shot-classification
train-eval-index:
- config: inverse-scaling-test
task: text-generation
task_id: text_zero_shot_classification
splits:
eval_split: train
col_mapping:
prompt: text
classes: classes
answer_index: target
---
|
siberspace | null | null | null | false | null | false | siberspace/elisabeth-borne | 2022-10-15T11:58:16.000Z | null | false | 3a1b88eba215ea26ae74e6884e793bda02d2442f | [] | [] | https://huggingface.co/datasets/siberspace/elisabeth-borne/resolve/main/README.md | |
siberspace | null | null | null | false | null | false | siberspace/ricardo | 2022-10-15T16:19:40.000Z | null | false | 70cdab03f29a290ff14d21f9f8080286cd72dd86 | [] | [] | https://huggingface.co/datasets/siberspace/ricardo/resolve/main/README.md | |
omerist | null | null | null | false | 2 | false | omerist/turknews-mini | 2022-10-15T12:38:10.000Z | null | false | d563042b2a16501be4c7eeb7b71998db3a24adec | [] | [] | https://huggingface.co/datasets/omerist/turknews-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: review
dtype: string
- name: title
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 9064933.18105424
num_examples: 3534
- name: validation
num_bytes: 1008069.8189457601
num_examples: 393
download_size: 5732599
dataset_size: 10073003.0
---
# Dataset Card for "turknews-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nielsr | null | null | null | false | 7 | false | nielsr/balloon | 2022-10-15T13:02:05.000Z | null | false | c15baed0307c4fcc7b375258a182ea49ef2d4e8b | [] | [] | https://huggingface.co/datasets/nielsr/balloon/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 30808803.0
num_examples: 61
- name: validation
num_bytes: 8076058.0
num_examples: 13
download_size: 38814125
dataset_size: 38884861.0
---
# Dataset Card for "balloon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fantomas78 | null | null | null | false | null | false | Fantomas78/Tamburro | 2022-10-15T13:15:32.000Z | null | false | f6b502b946c723ef3dd51efcbe15f1753cbad6a1 | [] | [] | https://huggingface.co/datasets/Fantomas78/Tamburro/resolve/main/README.md | |
Michaelber123 | null | null | null | false | null | false | Michaelber123/mike | 2022-10-15T13:26:50.000Z | null | false | b2d765c28484069c071934ac7858b682c4e798e8 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Michaelber123/mike/resolve/main/README.md | ---
license: artistic-2.0
---
|
Sethyyann3572 | null | null | null | false | 1 | false | Sethyyann3572/glue-topia | 2022-10-15T13:32:42.000Z | null | false | 519c6f85f8dc6cbbf4878ebdb71dd39054c5357d | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Sethyyann3572/glue-topia/resolve/main/README.md | ---
license: openrail
---
topia
Sport
topia
Documentaire
topia
Song Of Topia
topia |
deman539 | null | null | null | false | null | false | deman539/celebrity_in_movie_demo | 2022-10-15T14:50:25.000Z | null | false | a0bd554a17af724da30bd7b22b77022d9cb67991 | [] | [] | https://huggingface.co/datasets/deman539/celebrity_in_movie_demo/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: output
splits:
- name: train
num_bytes: 2237547.0
num_examples: 5
download_size: 1373409
dataset_size: 2237547.0
---
# Dataset Card for "celebrity_in_movie_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
biglam | null | null | null | false | 54 | false | biglam/gutenberg-poetry-corpus | 2022-10-18T10:53:52.000Z | null | false | fcd42e249fed48dbd1d3b9b969528ef9298d3464 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"tags:poetry",
"tags:stylistics",
"tags:poems",
"tags:gutenberg",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/biglam/gutenberg-poetry-corpus/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Gutenberg Poetry Corpus
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- poetry
- stylistics
- poems
- gutenberg
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Allison Parrish's Gutenberg Poetry Corpus
This corpus was originally published under the CC0 license by [Allison Parrish](https://www.decontextualize.com/). Please visit Allison's fantastic [accompanying GitHub repository](https://github.com/aparrish/gutenberg-poetry-corpus) for usage inspiration as well as more information on how the data was mined, how to create your own version of the corpus, and examples of projects using it.
This dataset contains 3,085,117 lines of poetry from hundreds of Project Gutenberg books. Each line has a corresponding `gutenberg_id` (1191 unique values) from Project Gutenberg.
```python
Dataset({
features: ['line', 'gutenberg_id'],
num_rows: 3085117
})
```
A row of data looks like this:
```python
{'line': 'And retreated, baffled, beaten,', 'gutenberg_id': 19}
```
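Rows in this schema can be grouped back into per-book lists with plain Python (the rows below are invented stand-ins in the same `{'line', 'gutenberg_id'}` shape, apart from the sample line above):

```python
from collections import defaultdict

# Toy rows in the corpus schema: {'line': str, 'gutenberg_id': int}
rows = [
    {"line": "And retreated, baffled, beaten,", "gutenberg_id": 19},
    {"line": "Of arms and the man I sing,", "gutenberg_id": 20},
    {"line": "who, forced by fate,", "gutenberg_id": 20},
]

by_book = defaultdict(list)
for row in rows:
    by_book[row["gutenberg_id"]].append(row["line"])

print(len(by_book[20]))  # number of lines collected for book 20
```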
|
svnfs | null | null | null | false | 14 | false | svnfs/depth-of-field | 2022-11-13T23:33:39.000Z | null | false | e078a9a8bb873844031a65f6a0cc198ddcc1c6a5 | [] | [
"license:apache-2.0",
"annotations_creators:Stavros Niafas",
"image_size:(200,300,3)",
"source_dataset:unsplash",
"task_categories:image-classification",
"task_categories:image-segmentation"
] | https://huggingface.co/datasets/svnfs/depth-of-field/resolve/main/README.md | ---
license: apache-2.0
annotations_creators:
- Stavros Niafas
sample_number:
- 1200
class_number:
- 2
image_size:
- (200,300,3)
source_dataset:
- unsplash
task_categories:
- image-classification
- image-segmentation
dataset_info:
- config_name: depth-of-field
features:
- name: image
dtype: string
- name: class
dtype:
class_label:
names:
0: bokeh
1: no-bokeh
- config_name: default
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
splits:
- name: train
num_bytes: 192150
num_examples: 1200
download_size: 38792692
dataset_size: 192150
---
## Dataset Summary
The Depth-of-Field (DoF) dataset comprises 1,200 annotated images, binary-annotated as with (0) or without (1) bokeh effect, i.e. shallow or deep depth of field. It is forked from the [Unsplash 25K](https://github.com/unsplash/datasets) dataset.
## Dataset Description
- **Repository:** [https://github.com/sniafas/photography-style-analysis](https://github.com/sniafas/photography-style-analysis)
- **Paper:** [More Information Needed](https://www.researchgate.net/publication/355917312_Photography_Style_Analysis_using_Machine_Learning)
### Citation Information
```
@article{sniafas2021,
title={DoF: An image dataset for depth of field classification},
author={Niafas, Stavros},
doi= {10.13140/RG.2.2.29880.62722},
url= {https://www.researchgate.net/publication/364356051_DoF_depth_of_field_datase}
year={2021}
}
```
Note that each DoF dataset has its own citation. Please see the source to
get the correct citation for each contained dataset. |
SamHernandez | null | null | null | false | null | false | SamHernandez/my-style | 2022-10-15T14:17:13.000Z | null | false | 0eea994c2f3958629e34934373d4b48ccd53c20e | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/SamHernandez/my-style/resolve/main/README.md | ---
license: afl-3.0
---
|
Mbermudez | null | null | null | false | 1 | false | Mbermudez/mike | 2022-10-15T15:43:53.000Z | null | false | 6c9e42b0a14c5b017947313f7098d871fb498b91 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Mbermudez/mike/resolve/main/README.md | ---
license: openrail
---
|
ghoumrassi | null | null | null | false | 21 | false | ghoumrassi/clothes_sample | 2022-10-15T18:07:22.000Z | null | false | 75321e3f022839c10b67ba9c08bb6efac8e17aca | [] | [] | https://huggingface.co/datasets/ghoumrassi/clothes_sample/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 20078406.0
num_examples: 990
download_size: 0
dataset_size: 20078406.0
---
# Dataset Card for "clothes_sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jaxmetaverse | null | null | null | false | null | false | jaxmetaverse/wukong | 2022-10-16T02:07:16.000Z | null | false | b302b4605dd1a192ee9999e260009eadd110fd7d | [] | [
"license:openrail"
] | https://huggingface.co/datasets/jaxmetaverse/wukong/resolve/main/README.md | ---
license: openrail
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-4df82b-1769161494 | 2022-10-15T16:03:51.000Z | null | false | 540de892a1be8640934c938b4177e1de14ca3559 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:inverse-scaling/NeQA"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-4df82b-1769161494/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- inverse-scaling/NeQA
eval_info:
task: text_zero_shot_classification
model: gpt2-xl
metrics: []
dataset_name: inverse-scaling/NeQA
dataset_config: inverse-scaling--NeQA
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: gpt2-xl
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@rololbot](https://huggingface.co/rololbot) for evaluating this model. |
blancoloureiro | null | null | null | false | null | false | blancoloureiro/fotos | 2022-10-15T17:14:17.000Z | null | false | 4516f87f63964a28cae1eda838ebc267388703ea | [] | [
"license:openrail"
] | https://huggingface.co/datasets/blancoloureiro/fotos/resolve/main/README.md | ---
license: openrail
---
|
alexinigoc | null | null | null | false | null | false | alexinigoc/DatasetTraining | 2022-10-15T21:04:07.000Z | null | false | 65bb3029428ccce24e597b76531e6af13b389f19 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/alexinigoc/DatasetTraining/resolve/main/README.md | ---
license: afl-3.0
---
|
arbml | null | null | null | false | null | false | arbml/Iraqi_Dialect | 2022-10-15T21:17:07.000Z | null | false | efce2cf816cf1abad0c590e9e737e5289e1f9394 | [] | [] | https://huggingface.co/datasets/arbml/Iraqi_Dialect/resolve/main/README.md | ---
dataset_info:
features:
- name: No.
dtype: string
- name: ' Tex'
dtype: string
- name: label
dtype:
class_label:
names:
0: 'False'
1: IDK
2: N
3: 'True'
splits:
- name: train
num_bytes: 365478
num_examples: 1672
download_size: 134999
dataset_size: 365478
---
# Dataset Card for "Iraqi_Dialect"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
oscarmutante | null | null | null | false | null | false | oscarmutante/oscar | 2022-10-15T21:28:32.000Z | null | false | 991d85ba7b296eb212731f44c61e7cc3e1543700 | [] | [
"license:unlicense"
] | https://huggingface.co/datasets/oscarmutante/oscar/resolve/main/README.md | ---
license: unlicense
---
|
arbml | null | null | null | false | null | false | arbml/Sudanese_Dialect_Tweet | 2022-10-15T21:40:01.000Z | null | false | ee7fc57264b8056f8341f8215e5307a680a78f0a | [] | [] | https://huggingface.co/datasets/arbml/Sudanese_Dialect_Tweet/resolve/main/README.md | ---
dataset_info:
features:
- name: Tweet
dtype: string
- name: Annotator 1
dtype: string
- name: Annotator 2
dtype: string
- name: Annotator 3
dtype: string
- name: Mode
dtype: string
- name: Date
dtype: string
splits:
- name: train
num_bytes: 345088
num_examples: 2123
download_size: 141675
dataset_size: 345088
---
# Dataset Card for "Sudanese_Dialect_Tweet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/Sudanese_Dialect_Tweet_Tele | 2022-10-15T21:47:19.000Z | null | false | 8e2e32d0832c597e4ba2b1f252e59cec765a8c37 | [] | [] | https://huggingface.co/datasets/arbml/Sudanese_Dialect_Tweet_Tele/resolve/main/README.md | ---
dataset_info:
features:
- name: Tweet ID
dtype: string
- name: Tweet Text
dtype: string
- name: Date
dtype: string
- name: label
dtype:
class_label:
names:
0: NEGATIVE
1: POSITIVE
2: OBJECTIVE
splits:
- name: train
num_bytes: 872272
num_examples: 5346
download_size: 353611
dataset_size: 872272
---
# Dataset Card for "Sudanese_Dialect_Tweet_Tele"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Romecr | null | null | null | false | null | false | Romecr/testImages | 2022-10-15T21:49:01.000Z | null | false | c3746b6d2227edbea4a21d80311bb798433f6189 | [] | [
"license:other"
] | https://huggingface.co/datasets/Romecr/testImages/resolve/main/README.md | ---
license: other
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961515 | 2022-10-15T21:53:08.000Z | null | false | 1bf5e6c1c2761f004eb867b20ad5d8a173ace8da | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961515/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961516 | 2022-10-15T21:53:37.000Z | null | false | 8b2593845c16fa3deed61cb75900f4d472fc90f5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-c4cf3f-1771961516/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-large-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
shivi | null | null | null | false | 352 | false | shivi/cheques_sample_data | 2022-11-05T21:31:01.000Z | null | false | cd13b81d7a5f2a2097052eee7be3652d71c7e698 | [] | [] | https://huggingface.co/datasets/shivi/cheques_sample_data/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: test
num_bytes: 7518544.0
num_examples: 400
- name: train
num_bytes: 56481039.4
num_examples: 2800
- name: validation
num_bytes: 15034990.0
num_examples: 800
download_size: 58863727
dataset_size: 79034573.4
---
# Dataset Card for "cheques_sample_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/Satirical_Fake_News | 2022-10-15T22:37:57.000Z | null | false | c14be6279b7e817d010409aaad46df114f0af3f5 | [] | [] | https://huggingface.co/datasets/arbml/Satirical_Fake_News/resolve/main/README.md | ---
dataset_info:
features:
- name: Text
dtype: string
splits:
- name: train
num_bytes: 6131349
num_examples: 3221
download_size: 3223892
dataset_size: 6131349
---
# Dataset Card for "Satirical_Fake_News"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/NArabizi | 2022-10-15T22:48:18.000Z | null | false | 4be22018d039ee657dbeb7ff2e62fc9ae8eefdb6 | [] | [] | https://huggingface.co/datasets/arbml/NArabizi/resolve/main/README.md | ---
dataset_info:
features:
- name: ID
dtype: string
- name: label
dtype:
class_label:
names:
0: NEU
1: NEG
2: MIX
3: POS
splits:
- name: test
num_bytes: 4034
num_examples: 144
- name: train
num_bytes: 27839
num_examples: 998
- name: validation
num_bytes: 3823
num_examples: 137
download_size: 12217
dataset_size: 35696
---
# Dataset Card for "NArabizi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml | null | null | null | false | null | false | arbml/ArSAS | 2022-10-15T22:51:35.000Z | null | false | 619c18ba46019c28099c82a430e773e98471b5db | [] | [] | https://huggingface.co/datasets/arbml/ArSAS/resolve/main/README.md | ---
dataset_info:
features:
- name: '#Tweet_ID'
dtype: string
- name: Tweet_text
dtype: string
- name: Topic
dtype: string
- name: Sentiment_label_confidence
dtype: string
- name: Speech_act_label
dtype: string
- name: Speech_act_label_confidence
dtype: string
- name: label
dtype:
class_label:
names:
0: Negative
1: Neutral
2: Positive
3: Mixed
splits:
- name: train
num_bytes: 6147723
num_examples: 19897
download_size: 2998319
dataset_size: 6147723
---
# Dataset Card for "ArSAS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Alfitauwu | null | null | null | false | null | false | Alfitauwu/Pruebitaaaxd | 2022-10-15T23:48:35.000Z | null | false | 2d787d3f9d73323bcafa04c7fd3edb791aff5589 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Alfitauwu/Pruebitaaaxd/resolve/main/README.md | ---
license: openrail
---
|
olm | null | null | null | false | 253 | false | olm/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295 | 2022-11-04T17:14:25.000Z | null | false | 0281194d215c73170d30add87e5f16f9dec1d641 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"tags:pretraining",
"tags:language modelling",
"tags:common crawl",
"tags:web"
] | https://huggingface.co/datasets/olm/olm-CC-MAIN-2022-40-sampling-ratio-0.15894621295/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM September/October 2022 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM September/October 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the September/October 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. |
seraldu | null | null | null | false | null | false | seraldu/sergio_prueba | 2022-10-16T08:02:35.000Z | null | false | 3cba5a6b651b0ec3ad8ecef4efa9906f5b764a7f | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/seraldu/sergio_prueba/resolve/main/README.md | ---
license: bigscience-openrail-m
---
|
ohtaras | null | null | null | false | null | false | ohtaras/Kn | 2022-10-16T10:19:18.000Z | null | false | 4f2dc2ad903dd9e297a4169a7fb54c4492af8a22 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/ohtaras/Kn/resolve/main/README.md | ---
license: unknown
---
|
akashrai | null | null | null | false | null | false | akashrai/dreambooth_image_training | 2022-10-16T10:43:49.000Z | null | false | 78f73995b25140373869016fcd809fbd710b4c9c | [] | [
"license:unknown"
] | https://huggingface.co/datasets/akashrai/dreambooth_image_training/resolve/main/README.md | ---
license: unknown
---
|
siberspace | null | null | null | false | null | false | siberspace/carton | 2022-10-16T10:43:33.000Z | null | false | 02bbbd4aedd6c9809d7c4527bb5d9f3fb6fefbdc | [] | [] | https://huggingface.co/datasets/siberspace/carton/resolve/main/README.md | |
poojaruhal | null | null | null | false | null | false | poojaruhal/NLBSE-class-comment-classification | 2022-10-16T11:24:39.000Z | null | false | 6b01b9b18c3b40be4aac81fac9952fd37ca2e4dc | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/poojaruhal/NLBSE-class-comment-classification/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-39d19a-1775961623 | 2022-10-16T11:37:33.000Z | null | false | d370089b399492cc158548e9589fc3af76f4712a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-39d19a-1775961623/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-e4c053-1775661622 | 2022-10-16T11:47:19.000Z | null | false | 2bf5a7e1402a6f32c2073a75c61d75f4c9cca2e7 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:inverse-scaling/NeQA"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-inverse-scaling__NeQA-inverse-scaling__NeQA-e4c053-1775661622/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- inverse-scaling/NeQA
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b1
metrics: []
dataset_name: inverse-scaling/NeQA
dataset_config: inverse-scaling--NeQA
dataset_split: train
col_mapping:
text: prompt
classes: classes
target: answer_index
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: inverse-scaling/NeQA
* Config: inverse-scaling--NeQA
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@CG80499](https://huggingface.co/CG80499) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161639 | 2022-10-16T12:08:13.000Z | null | false | 1261b0ab43c1f488e329bb4b8e0fae03ece768c4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161639/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161640 | 2022-10-16T12:08:15.000Z | null | false | ac7d6e4063103d3c15fbef1983c89e4760be6f4f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161640/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-base-lener_br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161641 | 2022-10-16T12:08:41.000Z | null | false | 7fac43a456593157221805407acd8171014c9259 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161641/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-large-lener_br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161642 | 2022-10-16T12:08:35.000Z | null | false | 8a6f9b98bdf89c8fef01ee76b1fab91d5ce74981 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161642/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161643 | 2022-10-16T12:09:02.000Z | null | false | 799c1d1c06895d834d846f0c09bbff283499a0ca | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-b36dee-1776161643/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-large-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
pythonist | null | null | null | false | 24 | false | pythonist/PubMedQA | 2022-11-10T10:15:08.000Z | null | false | f1c3b175632a7bffcefefea70fa7a92d4e36d1ed | [] | [] | https://huggingface.co/datasets/pythonist/PubMedQA/resolve/main/README.md | ---
train-eval-index:
- config: pythonist--PubMedQA
task: question-answering
task_id: extractive_question_answering
splits:
eval_split: train
col_mapping:
id: answers.answer_start
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861660 | 2022-10-16T12:52:21.000Z | null | false | 1f70cbec44ea6b75058fdad68ff55b8de9d4a522 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861660/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-large-lener_br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-large-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861659 | 2022-10-16T12:51:17.000Z | null | false | 230cdaa657e88026dab1f182c34af5653d8a55ef | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861659/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-base-lener_br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener_br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861661 | 2022-10-16T12:51:38.000Z | null | false | 7f85845a7030e1397ee63b931b90de06a6ee7847 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861661/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-base-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861662 | 2022-10-16T12:52:40.000Z | null | false | 4ac218a6129db895ec2ed0e960154742245b0d61 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-c186f5-1776861662/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/xlm-roberta-large-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961678 | 2022-10-16T13:19:26.000Z | null | false | 2ad1b139cd7f1240d4046d69387149f0d2f52938 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961678/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: pierreguillou/ner-bert-base-cased-pt-lenerbr
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961679 | 2022-10-16T13:19:52.000Z | null | false | cffeb21da0785a570afcf98be56916319f867852 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-280a5d-1776961679/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: pierreguillou/ner-bert-large-cased-pt-lenerbr
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061680 | 2022-10-16T13:19:34.000Z | null | false | 530cf9d0b4b5d10007e8722680b6175b5d11d4bb | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061680/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: pierreguillou/ner-bert-base-cased-pt-lenerbr
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061681 | 2022-10-16T13:20:02.000Z | null | false | 0679fb25b4bb759691c22388d45706c8f85ba4b2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-2a71c5-1777061681/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: pierreguillou/ner-bert-large-cased-pt-lenerbr
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: validation
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161682 | 2022-10-16T13:21:40.000Z | null | false | 449dcb775af777bad2fe5cb43070e97c76f65e05 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161682/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: pierreguillou/ner-bert-base-cased-pt-lenerbr
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-base-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161683 | 2022-10-16T13:22:52.000Z | null | false | 13f46098ec2521c788887fd931319674601c0f47 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-lener_br-lener_br-851daf-1777161683/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: pierreguillou/ner-bert-large-cased-pt-lenerbr
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: pierreguillou/ner-bert-large-cased-pt-lenerbr
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
whatafok | null | null | null | false | null | false | whatafok/lul | 2022-10-16T13:46:03.000Z | null | false | 78e20d4b2a97e79041a4cb8d8d270a31c4de8f2c | [] | [
"license:other"
] | https://huggingface.co/datasets/whatafok/lul/resolve/main/README.md | ---
license: other
---
|
alfredodeza | null | null | null | false | 1 | false | alfredodeza/temporary-dataset | 2022-10-16T14:05:33.000Z | null | false | f142a503e35ed32d56bf8a1d202195df1b5b9b2b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/alfredodeza/temporary-dataset/resolve/main/README.md | ---
license: apache-2.0
---
|
vialibre | null | @misc{https://doi.org/10.48550/arxiv.2207.06591,
doi = {10.48550/ARXIV.2207.06591},
url = {https://arxiv.org/abs/2207.06591},
author = {Alemany, Laura Alonso and Benotti, Luciana and González, Lucía and Maina, Hernán and Busaniche, Beatriz and Halvorsen, Alexia and Bordone, Matías and Sánchez, Jorge},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI),
FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A tool to overcome technical barriers for bias assessment in human language technologies},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
} | null | false | 5 | false | vialibre/spanish3bwc_embedding | 2022-10-16T16:01:51.000Z | null | false | 519956881185cdae861a7155139f0b8f713c7a5a | [] | [
"arxiv:2207.06591",
"language:es",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"license:mit"
] | https://huggingface.co/datasets/vialibre/spanish3bwc_embedding/resolve/main/README.md | ---
language:
- 'es'
multilinguality:
- monolingual
size_categories:
- "1M<n<10M"
pretty_name: "Vocabulary embedding - Spanish 3 Billion Words Corpora"
license: mit
---
# Dataset Card for "Vocabulary embedding - Spanish 3 Billion Words Corpora"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Source Data](#source-data)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper:** https://arxiv.org/abs/2207.06591
### Dataset Summary
* Number of words: 1,220,051 (1.2M)
* Minimum absolute frequency: 5.
### Languages
* Spanish
### Source Data
- **Repository:** https://huggingface.co/datasets/nanom/splittedspanish3bwc
## Dataset Structure
### Data Fields
| Field | Description |
|-------|-------------|
|word | Word|
|embedding | Normalized word embedding. Algorithm: FastText; parameters: (min subword-ngram=3, max subword-ngram=6, minCount=5, epochs=20, dim=300) |
|pca | Principal Component Analysis (dim=2) |
Example:
```
word administrar
embedding "[-0.0661974624, -0.0224714149, -0.0005360196, ...]"
pca "[-0.0098309302, 0.1858775107]"
```
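A minimal sketch of reading a row like the one above. It assumes (as the example suggests) that `embedding` and `pca` are stored as JSON-formatted strings rather than native float lists; the row literal here is illustrative, with the embedding truncated to three components:

```python
import json

# Hypothetical row mirroring the example above; the real embedding has dim=300.
row = {
    "word": "administrar",
    "embedding": "[-0.0661974624, -0.0224714149, -0.0005360196]",
    "pca": "[-0.0098309302, 0.1858775107]",
}

# Parse the stringified vectors into lists of floats.
embedding = json.loads(row["embedding"])
pca = json.loads(row["pca"])

print(len(pca))  # 2, matching the PCA(dim=2) field described above
```

When loading the full dataset with `datasets.load_dataset`, the same `json.loads` step would apply per row before any vector math.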
### Data Splits
| name |train|
|------------------------|----:|
|full |1220051|
## Additional Information
### Licensing Information
* [MIT Licence](https://huggingface.co/datasets/vialibre/spanish3bwc_embedding/resolve/main/LICENSE)
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2207.06591,
doi = {10.48550/ARXIV.2207.06591},
url = {https://arxiv.org/abs/2207.06591},
author = {Alemany, Laura Alonso and Benotti, Luciana and González, Lucía and Maina, Hernán and Busaniche, Beatriz and Halvorsen, Alexia and Bordone, Matías and Sánchez, Jorge},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI),
FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {A tool to overcome technical barriers for bias assessment in human language technologies},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}
```
|
51la5 | null | null | null | false | 9 | false | 51la5/standups_sentiment | 2022-10-16T20:26:47.000Z | null | false | 4d88450dd3a4e714e17a393403f34de1b113eb00 | [] | [] | https://huggingface.co/datasets/51la5/standups_sentiment/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: positive
splits:
- name: train
num_bytes: 42085
num_examples: 509
download_size: 22422
dataset_size: 42085
---
# Dataset Card for "standups_sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awacke1 | null | null | null | false | null | false | awacke1/MusicGenreLearnerAI | 2022-10-16T17:06:36.000Z | null | false | bccf1066fa8172c13cd146e59793f7a09ff06023 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/awacke1/MusicGenreLearnerAI/resolve/main/README.md | ---
license: apache-2.0
---
|
tiagoblima | null | null | null | false | 57 | false | tiagoblima/punctuation-nilc | 2022-11-13T17:36:48.000Z | null | false | 645e5c61ccbabf973360d107b6cb9a35e75780d4 | [] | [] | https://huggingface.co/datasets/tiagoblima/punctuation-nilc/resolve/main/README.md | ---
dataset_info:
features:
- name: text_id
dtype: int64
- name: text
dtype: string
- name: level
dtype: string
- name: tokens
sequence: string
- name: labels
sequence: string
splits:
- name: test
num_bytes: 1177684.2701598366
num_examples: 2604
- name: train
num_bytes: 4224993.504240118
num_examples: 9371
- name: validation
num_bytes: 479472.5920696906
num_examples: 1041
download_size: 1802076
dataset_size: 5882150.366469645
---
# Dataset Card for "punctuation-nilc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
alvez29 | null | null | null | false | null | false | alvez29/myface | 2022-10-16T18:50:25.000Z | null | false | 52759bdb3898dd3c5b6154cf8c5e19a96cb14f78 | [] | [] | https://huggingface.co/datasets/alvez29/myface/resolve/main/README.md | |
Surn | null | null | null | false | null | false | Surn/Fett | 2022-10-16T19:48:14.000Z | null | false | 8250a56c880f2b98abc5c77adf3e19732a09c6e3 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/Surn/Fett/resolve/main/README.md | ---
license: gpl-3.0
---
|
conceptofmind | null | null | null | false | 132 | false | conceptofmind/code-train-dedup | 2022-10-16T23:13:37.000Z | null | false | ae4b42173eec5be533db80da6fc14909fb3ed8a8 | [] | [] | https://huggingface.co/datasets/conceptofmind/code-train-dedup/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 21444084443
num_examples: 2767170
download_size: 7862859211
dataset_size: 21444084443
---
# Dataset Card for "code-train-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
conceptofmind | null | null | null | false | 23 | false | conceptofmind/code-valid-dedup | 2022-10-16T23:14:59.000Z | null | false | 48739c7b3b776830f7a25ab899aabad7ae7d1ac4 | [] | [] | https://huggingface.co/datasets/conceptofmind/code-valid-dedup/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 414689500
num_examples: 45404
download_size: 151013163
dataset_size: 414689500
---
# Dataset Card for "code-valid-dedup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pythonist | null | null | null | false | null | false | pythonist/newdata | 2022-10-17T03:20:56.000Z | null | false | 4ae054e170ac4a829f86b90606264dc1118790da | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/pythonist/newdata/resolve/main/README.md | ---
license: apache-2.0
---
|
w0lfandbehem0th | null | null | null | false | null | false | w0lfandbehem0th/test-images | 2022-10-17T03:46:04.000Z | null | false | 13f42fa99ebc10fedecf1e7ee52546d8fd9e6667 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/w0lfandbehem0th/test-images/resolve/main/README.md | ---
license: apache-2.0
---
|
Bingsu | null | null | null | false | 50 | false | Bingsu/laion2b_multi_korean_subset_with_image | 2022-11-03T05:10:40.000Z | null | false | 899b75095a573984a124727dcce8de7e30ad67dc | [] | [
"annotations_creators:crowdsourced",
"language:ko",
"language_creators:crowdsourced",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|laion/laion2B-multi",
"task_categories:feature-extraction"
] | https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- ko
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'laion2b multi korean subset with image'
size_categories:
- 1M<n<10M
source_datasets:
- extended|laion/laion2B-multi
tags: []
task_categories:
- feature-extraction
task_ids: []
---
# laion2b_multi_korean_subset_with_image
## Dataset Description
- **Download Size** 342 GB
This dataset collects the images from [Bingsu/laion2B-multi-korean-subset](https://huggingface.co/datasets/Bingsu/laion2B-multi-korean-subset) that were successfully downloaded with img2dataset.
It contains 9,800,137 images.
Each image was resized so that its shorter side is 256 pixels and saved as a quality-100 webp file.
## Usage
### 1. datasets
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion2b_multi_korean_subset_with_image", streaming=True, split="train")
>>> dataset.features
{'image': Image(decode=True, id=None),
'text': Value(dtype='string', id=None),
'width': Value(dtype='int32', id=None),
'height': Value(dtype='int32', id=None)}
>>> next(iter(dataset))
{'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=256x256>,
'text': '소닉기어 에어폰5 휴대용 스테레오 블루투스 헤드폰',
'width': 256,
'height': 256}
```
### 2. webdataset
This dataset is organized so that it can be used with [webdataset](https://github.com/webdataset/webdataset). If you process the data as a stream without downloading it, this is much faster than method 1.
!! The method below raises an error on Windows.
```python
>>> import webdataset as wds
>>> url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
>>> dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json")
```
```python
>>> next(iter(dataset))
...
```
As of this writing (2022-10-18), webdataset does not yet support automatic decoding of webp images ([PR #215](https://github.com/webdataset/webdataset/pull/215)), so you have to decode them yourself.
```python
import io
import webdataset as wds
from PIL import Image
def preprocess(data):
webp, jsn = data
img = Image.open(io.BytesIO(webp))
out = {
"image": img,
"text": jsn["caption"],
"width": jsn["width"],
"height": jsn["height"]
}
return out
url = "https://huggingface.co/datasets/Bingsu/laion2b_multi_korean_subset_with_image/resolve/main/data/{00000..02122}.tar"
dataset = wds.WebDataset(url).shuffle(1000).decode("pil").to_tuple("webp", "json").map(preprocess)
```
```python
>>> next(iter(dataset))
{'image': <PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=427x256>,
'text': '[따블리에]유아동 미술가운, 미술 전신복',
'width': 427,
'height': 256}
```
## Note

Each tar file is structured as shown above.
Images that failed to download were skipped, so the file names are not fully consecutive.
Each json file looks like this:
```json
{
"caption": "\ub514\uc790\uc778 \uc53d\ud0b9\uacfc \ub514\uc9c0\ud138 \ud2b8\ub79c\uc2a4\ud3ec\uba54\uc774\uc158",
"url": "https://image.samsungsds.com/kr/insights/dt1.jpg?queryString=20210915031642",
"key": "014770069",
"status": "success",
"error_message": null,
"width": 649,
"height": 256,
"original_width": 760,
"original_height": 300,
"exif": "{}"
}
```
Each txt file contains the "caption" field of the corresponding json file.
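The key/extension grouping described above can be illustrated with a short, self-contained sketch. The key `014770069` is taken from the json example; the payload bytes are made up, and webdataset's actual internals may differ — this only mimics the tar layout.

```python
import io
import tarfile
from collections import defaultdict

# Build a tiny synthetic tar in memory that mimics the layout described above:
# one .webp/.json/.txt triple sharing a key (payload bytes are placeholders).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [
        ("014770069.webp", b"<webp bytes>"),
        ("014770069.json", b'{"caption": "..."}'),
        ("014770069.txt", b"..."),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Group member contents by their shared key, as webdataset does conceptually.
groups = defaultdict(dict)
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        groups[key][ext] = tar.extractfile(member).read()
```

After this, `groups["014770069"]` holds the webp bytes, the json record, and the caption text for that sample.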
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-squad-plain_text-f76498-1781661804 | 2022-10-17T05:20:41.000Z | null | false | 86820e0d48153d64153a2f70ace60c1090697f07 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-squad-plain_text-f76498-1781661804/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad
eval_info:
task: extractive_question_answering
model: csarron/bert-base-uncased-squad-v1
metrics: []
dataset_name: squad
dataset_config: plain_text
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: csarron/bert-base-uncased-squad-v1
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. |
eliwill | null | null | null | false | 3 | false | eliwill/Watts | 2022-10-17T05:50:50.000Z | null | false | 1a2e776f38e29c4e70e3ce299b76b1933b463e60 | [] | [] | https://huggingface.co/datasets/eliwill/Watts/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5040818
num_examples: 17390
- name: validation
num_bytes: 99856
num_examples: 399
download_size: 2976066
dataset_size: 5140674
---
# Dataset Card for "Watts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
inmortalkaktus | null | @InProceedings{huggingface:dataset,
title = {A pixel art Pokemon sprites dataset},
author={InmortalKaktus / aleoli.
},
year={2022}
} | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | false | 42 | false | inmortalkaktus/pokemon-pixel-art | 2022-10-18T06:46:02.000Z | null | false | 6cce5a33286e7ed4d15bfd98ef6b9b476093cc97 | [] | [] | https://huggingface.co/datasets/inmortalkaktus/pokemon-pixel-art/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 228723
num_examples: 49
download_size: 178327
dataset_size: 228723
---
|
Jamesroyal | null | null | null | false | null | false | Jamesroyal/blogging | 2022-10-17T06:54:05.000Z | null | false | 18235dd7fd84b7f6868f9cc1a046d799e18da82f | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Jamesroyal/blogging/resolve/main/README.md | ---
license: afl-3.0
---
|
acurious | null | null | null | false | null | false | acurious/testdreambooth | 2022-10-17T08:05:53.000Z | null | false | e2303edabe49ef87791c19764cb0dbdb39177b1d | [] | [] | https://huggingface.co/datasets/acurious/testdreambooth/resolve/main/README.md | test |
vvincentt | null | null | null | false | 5 | false | vvincentt/lsf_dataset | 2022-10-17T10:41:39.000Z | null | false | 09e8932bb30d97e57848c117429e8f944acd3dfd | [] | [] | https://huggingface.co/datasets/vvincentt/lsf_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
splits:
- name: train
num_bytes: 1391968
num_examples: 1400
- name: validation
num_bytes: 497849
num_examples: 500
download_size: 629433
dataset_size: 1889817
---
# Dataset Card for "lsf_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
davanstrien | null | null | null | false | null | false | davanstrien/autotrain-data-genre | 2022-10-17T12:54:06.000Z | null | true | 0103deba71459c76bce99f8cf7c27779f51f4c20 | [] | [
"language:en",
"task_categories:text-classification"
] | https://huggingface.co/datasets/davanstrien/autotrain-data-genre/resolve/main/README.md | |
cjvt | null | @article{skvorc2022mice,
title = {MICE: Mining Idioms with Contextual Embeddings},
journal = {Knowledge-Based Systems},
volume = {235},
pages = {107606},
year = {2022},
issn = {0950-7051},
doi = {https://doi.org/10.1016/j.knosys.2021.107606},
url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686},
author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko},
} | SloIE is a manually labelled dataset of Slovene idiomatic expressions.
It contains 29,400 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning,
with appropriate manual annotations for each token. The idiomatic expressions were selected from the Slovene Lexical
Database (http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic
meaning were selected. The sentences were extracted from the Gigafida corpus. | false | 14 | false | cjvt/sloie | 2022-10-21T07:36:18.000Z | null | false | 81eca6cc0ab38a851d62f8fe8632acbd6c12c531 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:sl",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"task_categories:text-classification",
"task_categories:token-classification",
"tags:idiom-detecti... | https://huggingface.co/datasets/cjvt/sloie/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets: []
task_categories:
- text-classification
- token-classification
task_ids: []
pretty_name: Dataset of Slovene idiomatic expressions SloIE
tags:
- idiom-detection
- multiword-expression-detection
---
# Dataset Card for SloIE
### Dataset Summary
SloIE is a manually labelled dataset of Slovene idiomatic expressions. It contains 29399 sentences with 75 different expressions that can occur with either a literal or an idiomatic meaning, with appropriate manual annotations for each token. The idiomatic expressions were selected from the [Slovene Lexical Database]( (http://hdl.handle.net/11356/1030). Only expressions that can occur with both a literal and an idiomatic meaning were selected. The sentences were extracted from the Gigafida corpus.
For a more detailed description of the dataset, please see the paper Škvorc et al. (2022) - see below.
### Supported Tasks and Leaderboards
Idiom detection.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```json
{
'sentence': 'Fantje regljajo v enem kotu, deklice pa svoje obrazke barvajo s pisanimi barvami.',
'expression': 'barvati kaj s črnimi barvami',
'word_order': [11, 10, 12, 13, 14],
'sentence_words': ['Fantje', 'regljajo', 'v', 'enem', 'kotu,', 'deklice', 'pa', 'svoje', 'obrazke', 'barvajo', 's', 'pisanimi', 'barvami.'],
'is_idiom': ['*', '*', '*', '*', '*', '*', '*', '*', 'NE', 'NE', 'NE', 'NE', 'NE']
}
```
In this `sentence`, the words of the expression "barvati kaj s črnimi barvami" are used in a literal sense, as indicated by the "NE" annotations inside `is_idiom`. The "*" annotations indicate the words are not part of the expression.
### Data Fields
- `sentence`: raw sentence in string form - **WARNING**: this is at times slightly different from the words inside `sentence_words` (e.g., "..." here could be "." in `sentence_words`);
- `expression`: the annotated idiomatic expression;
- `word_order`: numbers indicating the positions of tokens that belong to the expression;
- `sentence_words`: words in the sentence;
- `is_idiom`: a string denoting whether each word has an idiomatic (`"DA"`), literal (`"NE"`), or ambiguous (`"NEJASEN ZGLED"`) meaning. `"*"` means that the word is not part of the expression.
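A minimal sketch (not part of the official dataset tooling) of how the per-token `is_idiom` annotations could be mapped to integer labels for token classification. The integer label scheme is our own assumption; the instance below is a shortened version of the sample from this card.

```python
# Assumed label scheme: 0 = outside expression, 1 = literal,
# 2 = idiomatic, 3 = ambiguous. Not an official mapping.
LABEL_MAP = {"*": 0, "NE": 1, "DA": 2, "NEJASEN ZGLED": 3}

def encode_instance(instance):
    """Pair each word with an integer label derived from `is_idiom`."""
    return [
        (word, LABEL_MAP[tag])
        for word, tag in zip(instance["sentence_words"], instance["is_idiom"])
    ]

# Tail of the sample instance shown above (shortened):
instance = {
    "sentence_words": ["svoje", "obrazke", "barvajo", "s", "pisanimi", "barvami."],
    "is_idiom": ["*", "NE", "NE", "NE", "NE", "NE"],
}
pairs = encode_instance(instance)
```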
## Additional Information
### Dataset Curators
Tadej Škvorc, Polona Gantar, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@article{skvorc2022mice,
title = {MICE: Mining Idioms with Contextual Embeddings},
journal = {Knowledge-Based Systems},
volume = {235},
pages = {107606},
year = {2022},
doi = {https://doi.org/10.1016/j.knosys.2021.107606},
url = {https://www.sciencedirect.com/science/article/pii/S0950705121008686},
author = {{\v S}kvorc, Tadej and Gantar, Polona and Robnik-{\v S}ikonja, Marko},
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
sfujiwara | null | null | null | false | 5 | false | sfujiwara/sample | 2022-10-18T21:27:18.000Z | null | false | 42237ba9cdc8ce88397b1874e73925abba4f338a | [] | [] | https://huggingface.co/datasets/sfujiwara/sample/resolve/main/README.md | # Sample
|
Mrdanizm | null | null | null | false | null | false | Mrdanizm/Mestablediffusion | 2022-10-17T14:02:07.000Z | null | false | e3bf91f4447ac06d3138f7d1591e551911197362 | [] | [
"license:other"
] | https://huggingface.co/datasets/Mrdanizm/Mestablediffusion/resolve/main/README.md | ---
license: other
---
|
creaoy | null | null | null | false | null | false | creaoy/me | 2022-10-17T14:25:02.000Z | null | false | 73f431e1a8282900c1475d0eea185ca3e2af0b0f | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/creaoy/me/resolve/main/README.md | ---
license: apache-2.0
---
|
Nyckelpiga | null | null | null | false | null | false | Nyckelpiga/images | 2022-10-17T17:19:59.000Z | null | false | 850a76c5c794ae87d5e4a15665b3de5bd2e61d95 | [] | [
"license:other"
] | https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/README.md | ---
license: other
---
#@markdown Add here the URLs to the images of the concept you are adding. 3-5 should be fine
urls = [
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3228-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3228-01_512_02.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3229-01_512_02.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3870-01-edit-02_crop_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4520_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4589-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4622-01-crop_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/ScanImage066_crop_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_4589-01_512.png",
"https://huggingface.co/datasets/Nyckelpiga/images/resolve/main/CRW_3348-01_512.png",
] |
shahidul034 | null | null | null | false | 2 | false | shahidul034/error_correction_model_dataset_raw | 2022-10-17T14:54:36.000Z | null | false | d6c91cf96df74ce879bb4e8837f4a59a8e7341f0 | [] | [] | https://huggingface.co/datasets/shahidul034/error_correction_model_dataset_raw/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 706774777.602151
num_examples: 4141927
download_size: 301173004
dataset_size: 706774777.602151
---
# Dataset Card for "error_correction_model_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
noahgift | null | null | null | false | 31 | false | noahgift/social-power-nba | 2022-10-17T16:07:45.000Z | null | false | 69b23927e6d5f4d09321dad6df33479be3ddee12 | [] | [
"license:cc-by-nc-nd-4.0"
] | https://huggingface.co/datasets/noahgift/social-power-nba/resolve/main/README.md | ---
license: cc-by-nc-nd-4.0
---
A dataset that has NBA data as well as social media data including twitter and wikipedia
|
barkermrl | null | null | null | false | null | false | barkermrl/mnist-c | 2022-10-17T17:28:57.000Z | null | false | ac8e5493caf159f4f717379cc7f434ad3c52e2f6 | [] | [
"arxiv:1906.02337",
"license:apache-2.0"
] | https://huggingface.co/datasets/barkermrl/mnist-c/resolve/main/README.md | ---
license: apache-2.0
---
Source: [https://github.com/google-research/mnist-c](https://github.com/google-research/mnist-c)
# MNIST-C
This repository contains the source code used to create the MNIST-C dataset, a
corrupted MNIST benchmark for testing out-of-distribution robustness of computer
vision models.
Please see our full paper [https://arxiv.org/abs/1906.02337](https://arxiv.org/abs/1906.02337) for more details.
## Dataset
The static dataset is available for download at [https://zenodo.org/record/3239543](https://zenodo.org/record/3239543). |
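A hedged loading sketch: the per-corruption `.npy` file names (`train_images`, `train_labels`, `test_images`, `test_labels`) and the one-directory-per-corruption layout are assumptions based on the GitHub repository, not verified here. The demo runs on synthetic arrays so it does not require the Zenodo download.

```python
import os
import tempfile
import numpy as np

def load_corruption(root, corruption):
    """Load one corruption's splits; file names are assumed, see note above."""
    d = os.path.join(root, corruption)
    return {
        name: np.load(os.path.join(d, name + ".npy"))
        for name in ("train_images", "train_labels", "test_images", "test_labels")
    }

# Demo on synthetic data shaped like MNIST (N, 28, 28, 1) uint8 images.
with tempfile.TemporaryDirectory() as root:
    d = os.path.join(root, "fog")  # "fog" is an illustrative corruption name
    os.makedirs(d)
    np.save(os.path.join(d, "train_images.npy"), np.zeros((10, 28, 28, 1), dtype=np.uint8))
    np.save(os.path.join(d, "train_labels.npy"), np.zeros(10, dtype=np.int64))
    np.save(os.path.join(d, "test_images.npy"), np.zeros((5, 28, 28, 1), dtype=np.uint8))
    np.save(os.path.join(d, "test_labels.npy"), np.zeros(5, dtype=np.int64))
    data = load_corruption(root, "fog")
```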
pachi107 | null | null | null | false | null | false | pachi107/autotrain-data-ethos-sentiments | 2022-10-17T16:28:44.000Z | null | false | ddf8d4553b13ae4aae9c5218e53851e38d721096 | [] | [
"language:en",
"task_categories:text-classification"
] | https://huggingface.co/datasets/pachi107/autotrain-data-ethos-sentiments/resolve/main/README.md | ---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: ethos-sentiments
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ethos-sentiments.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "well it's clear now why europeans can't differ niggers and shitskins from human",
"target": 0
},
{
"text": "These boys will then grow up with people making fun of them and they will then hate their parents for ruining their lives.",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['hate_speech', 'no_hate_speech'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 798 |
| valid | 200 |
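A small sketch of decoding the integer `target` column back to its class name, using the names listed in the `ClassLabel` above (the index order is assumed to match the listed order).

```python
# Names copied from the ClassLabel field description above.
names = ["hate_speech", "no_hate_speech"]

samples = [
    {"text": "...", "target": 0},
    {"text": "...", "target": 1},
]
decoded = [names[s["target"]] for s in samples]
```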
|
gr8brit | null | null | null | false | null | false | gr8brit/noahnerd | 2022-10-17T16:47:29.000Z | null | false | 8b13d64114106c6b43907f587001677a75070655 | [] | [
"license:other"
] | https://huggingface.co/datasets/gr8brit/noahnerd/resolve/main/README.md | ---
license: other
---
|
elisachen | null | null | null | false | null | false | elisachen/example_dataset | 2022-10-17T17:55:11.000Z | null | false | 6b3560071a612197aae2199e792e0300d859933b | [] | [
"license:bsd"
] | https://huggingface.co/datasets/elisachen/example_dataset/resolve/main/README.md | ---
license: bsd
---
|
Emmawang | null | null | null | false | 1 | false | Emmawang/newsdataset | 2022-10-19T17:37:48.000Z | null | false | f4a8763be5f754cf18d1c70b25fba896ba7fe7d1 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/Emmawang/newsdataset/resolve/main/README.md | ---
license: bsd
---
|
suzyanil | null | null | null | false | 6 | false | suzyanil/nba-data | 2022-10-17T17:54:58.000Z | null | false | 743025dc0b2c16c83448073633cf74307faa013a | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/suzyanil/nba-data/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
emmanuelr | null | null | null | false | null | false | emmanuelr/yelp-review | 2022-10-17T17:55:12.000Z | null | false | e0187676daa260497dc399625e90ecfc30f70de0 | [] | [
"license:other"
] | https://huggingface.co/datasets/emmanuelr/yelp-review/resolve/main/README.md | ---
license: other
---
|