author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
julien-c | null | null | null | false | 2 | false | julien-c/autotrain-data-dog-classifiers | 2022-09-02T16:13:38.000Z | null | false | 499bfa2c7cd0923311f8f2c4b24c5ffe462db922 | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/julien-c/autotrain-data-dog-classifiers/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: dog-classifiers
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dog-classifiers.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<474x592 RGB PIL image>",
"target": 1
},
{
"image": "<474x296 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=5, names=['akita inu', 'corgi', 'leonberger', 'samoyed', 'shiba inu'], id=None)"
}
```
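As a usage sketch (assuming the `datasets` library and access to the Hub repository), the `ClassLabel` ids can be decoded back into breed names like this:
```python
from datasets import load_dataset

ds = load_dataset("julien-c/autotrain-data-dog-classifiers", split="train")
names = ds.features["target"].names   # ['akita inu', 'corgi', 'leonberger', 'samoyed', 'shiba inu']
example = ds[0]                       # {'image': <PIL image>, 'target': <int class id>}
print(example["image"].size, names[example["target"]])
```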
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 598 |
| valid | 150 |
|
mrm8488 | null | null | null | false | 3 | false | mrm8488/sst2-es-mt | 2022-09-03T16:41:42.000Z | null | false | 61c35ebc14a9aec260ece1cb8061d3997663ea37 | [] | [
"language:es",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:sst2",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/mrm8488/sst2-es-mt/resolve/main/README.md | ---
language:
- es
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- sst2
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: Stanford Sentiment Treebank v2
---
# SST-2 Spanish
## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [SST-2 Dataset](https://huggingface.co/datasets/sst2)
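As an illustration of the translation step described above, here is a minimal sketch. It assumes the `easynmt` package and an `opus-mt` model; the specific model used for this dataset is not stated in the card.
```python
from easynmt import EasyNMT  # pip install easynmt

# The "opus-mt" model choice is an assumption; the card only says EasyNMT was used.
model = EasyNMT("opus-mt")
sentences = ["hide new secretions from the parental units"]  # an SST-2 training example
print(model.translate(sentences, target_lang="es", source_lang="en"))
```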
#### For more information check the official [Model Card](https://huggingface.co/datasets/sst2) |
mrm8488 | null | null | null | false | 1 | false | mrm8488/go_emotions-es-mt | 2022-10-20T19:23:36.000Z | null | false | f881ecdb455e1ef7b7e70164df594a98ddf3424e | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"language:es",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:go_emotions",
"task_categories:text-classification",
"task_ids:multi-class-classification"... | https://huggingface.co/datasets/mrm8488/go_emotions-es-mt/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- es
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- go_emotions
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: GoEmotions
tags:
- emotion
---
# GoEmotions Spanish
## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [GoEmotions](https://huggingface.co/datasets/go_emotions) dataset.
#### For more information check the official [Model Card](https://huggingface.co/datasets/go_emotions) |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-billsum-default-6d3727-15406134 | 2022-09-03T15:34:02.000Z | null | false | 21747468e4ffa56f4d4352d1cac863e46ca6b68f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:billsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-billsum-default-6d3727-15406134/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: pszemraj/led-large-book-summary
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
sagawa | null | null | null | false | 1 | false | sagawa/ord-uniq-canonicalized | 2022-09-04T02:41:10.000Z | null | false | 0bb175d32c10b0d335b2b6c845f63669f7f7cc41 | [] | [
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"tags:ord",
"tags:chemical",
"tags:reaction",
"task_categories:text2text-generation",
"task_categories:translation"
] | https://huggingface.co/datasets/sagawa/ord-uniq-canonicalized/resolve/main/README.md | ---
annotations_creators: []
language_creators: []
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: canonicalized ORD
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- ord
- chemical
- reaction
task_categories:
- text2text-generation
- translation
task_ids: []
---
### dataset description
We downloaded the open-reaction-database (ORD) dataset from [here](https://github.com/open-reaction-database/ord-data). As preprocessing, we removed duplicate entries and canonicalized the data using RDKit.
We used the following function to canonicalize the data, and removed SMILES that cannot be read by RDKit.
```python
from rdkit import Chem

def canonicalize(mol):
    # Parse the SMILES string and re-emit it in RDKit's canonical form;
    # the second argument (isomericSmiles=True) preserves stereochemistry.
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
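The snippet below is a minimal sketch of the filtering step described above; the exact filtering code is an assumption. `Chem.MolFromSmiles` returns `None` for SMILES that RDKit cannot read, so such entries can simply be dropped:
```python
from rdkit import Chem

def safe_canonicalize(smiles):
    # Returns the canonical SMILES, or None if RDKit cannot parse the input.
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol, True) if mol is not None else None

smiles = ["CCO", "c1ccccc1", "this-is-not-smiles"]
kept = [s for s in map(safe_canonicalize, smiles) if s is not None]
```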
We randomly split the preprocessed data into train, validation, and test sets in an 8:1:1 ratio. |
sagawa | null | null | null | false | 4 | false | sagawa/pubchem-10m-canonicalized | 2022-09-04T02:18:37.000Z | null | false | f83219601635a0a80fc99c13a9ca37f99ef34f0a | [] | [
"language_creators:expert-generated",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"tags:PubChem",
"tags:chemical",
"tags:SMILES"
] | https://huggingface.co/datasets/sagawa/pubchem-10m-canonicalized/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: canonicalized PubChem-10m
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- PubChem
- chemical
- SMILES
task_categories: []
task_ids: []
---
### dataset description
We downloaded the PubChem-10m dataset from [here](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip) and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
```python
from rdkit import Chem

def canonicalize(mol):
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
We randomly split the preprocessed data into train and validation sets in a 9:1 ratio. |
sagawa | null | null | null | false | 33 | false | sagawa/ZINC-canonicalized | 2022-09-04T02:21:08.000Z | null | false | 5497e797c551617bc1d94a859e4f3429f3d0b32d | [] | [
"language_creators:expert-generated",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"tags:ZINC",
"tags:chemical",
"tags:SMILES"
] | https://huggingface.co/datasets/sagawa/ZINC-canonicalized/resolve/main/README.md | ---
annotations_creators: []
language: []
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: canonicalized ZINC
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- ZINC
- chemical
- SMILES
task_categories: []
task_ids: []
---
### dataset description
We downloaded the ZINC dataset from [here](https://zinc15.docking.org/) and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
```python
from rdkit import Chem

def canonicalize(mol):
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
We randomly split the preprocessed data into train and validation sets in a 9:1 ratio. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-glue-cola-42256f-15426136 | 2022-09-03T13:50:56.000Z | null | false | 0b533459841603d5e5c20c41291bc8c981c49546 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-glue-cola-42256f-15426136/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: navsad/navid_test_bert
metrics: []
dataset_name: glue
dataset_config: cola
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: navsad/navid_test_bert
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yooo](https://huggingface.co/yooo) for evaluating this model. |
Kirili4ik | null | null | null | false | 7 | false | Kirili4ik/yandex_jobs | 2022-09-03T17:55:00.000Z | climate-fever | false | 7e22c8f616d706bebd86162860feabcf1c6affc4 | [] | [
"annotations_creators:expert-generated",
"language:ru",
"language_creators:found",
"license:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"tags:vacancies",
"tags:jobs",
"tags:ru",
"tags:yandex",
"task_categories:text-generation",
"task_categorie... | https://huggingface.co/datasets/Kirili4ik/yandex_jobs/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- ru
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
paperswithcode_id: climate-fever
pretty_name: yandex_jobs
size_categories:
- n<1K
source_datasets:
- original
tags:
- vacancies
- jobs
- ru
- yandex
task_categories:
- text-generation
- summarization
- multiple-choice
task_ids:
- language-modeling
---
# Dataset Card for Yandex_Jobs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of more than 600 IT vacancies in Russian, parsed from the Telegram channel https://t.me/ya_jobs. All the texts are fully structured, with no missing values.
### Supported Tasks and Leaderboards
`text-generation` using the 'Raw text' column.
`summarization`: generating the header from the full vacancy text.
`multiple-choice`: predicting the hashtags (choosing multiple from all those available in the dataset).
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.
## Dataset Structure
### Data Instances
The data is parsed from vacancies of the Russian IT company [Yandex](https://ya.ru/).
An example from the set looks as follows:
```
{'Header': 'Разработчик интерфейсов в группу разработки спецпроектов',
'Emoji': '🎳',
'Description': 'Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.\nМы ищем опытного и открытого новому фронтенд-разработчика.',
'Requirements': '• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах',
'Tasks': '• разрабатывать интерфейсы',
'Pluses': '• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL',
'Hashtags': '#фронтенд #турбо #JS',
'Link': 'https://ya.cc/t/t7E3UsmVSKs6L',
'Raw text': 'Разработчик интерфейсов в группу разработки спецпроектов🎳
Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.
Мы ищем опытного и открытого новому фронтенд-разработчика.
Мы ждем, что вы:
• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах
Что нужно делать:
• разрабатывать интерфейсы
Будет плюсом, если вы:
• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL
https://ya.cc/t/t7E3UsmVSKs6L
#фронтенд #турбо #JS'
}
```
### Data Fields
- `Header`: A string with a position title (str)
- `Emoji`: Emoji used at the end of the position title (usually associated with the position) (str)
- `Description`: Short description of the vacancy (str)
- `Requirements`: A couple of required technologies/programming languages/experience (str)
- `Tasks`: Examples of the tasks of the job position (str)
- `Pluses`: A couple of nice-to-have points for the applicant (technologies/experience/etc.) (str)
- `Hashtags`: A list of hashtags associated with the job (usually programming languages) (str)
- `Link`: A link to the job description (it may contain more information, but this is not checked) (str)
- `Raw text`: Raw text with all the formatting from the channel, constructed from the other fields (str)
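A minimal usage sketch (assuming the `datasets` library; the split name `"train"` is an assumption, since the card does not define splits):
```python
from datasets import load_dataset

ds = load_dataset("Kirili4ik/yandex_jobs", split="train")
texts = [row["Raw text"] for row in ds]   # e.g. as a corpus for language modeling
hashtags = ds[0]["Hashtags"].split()      # e.g. ['#фронтенд', '#турбо', '#JS']
```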
### Data Splits
There are not enough examples yet to split the data into train/test/validation sets, in my opinion.
## Dataset Creation
The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All unparsed examples and those missing any field (such as emojis or links) were deleted, reducing the set from 1600 vacancies to only 600 with no missing fields.
## Considerations for Using the Data
These vacancies are from only one IT company (Yandex). This means they can be quite specific and probably cannot be generalized to arbitrary vacancies, or even to arbitrary IT vacancies.
## Contributions
- **Point of Contact and Author:** Kirill Gelvan (Telegram: @kirili4ik) |
cryptexcode | null | null | null | false | 1 | false | cryptexcode/MPST | 2022-09-03T20:43:00.000Z | null | false | ee8774c4c8a9c7812856f14bdefecab8fe1576d3 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/cryptexcode/MPST/resolve/main/README.md | ---
license: cc-by-4.0
---
### Abstract
Social tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant.
### Content
This dataset was first published at LREC 2018 in Miyazaki, Japan.
Later, the dataset was enriched with user reviews; that follow-up paper was published at EMNLP 2020.
Full references for both papers are given in the citations below.
### Keywords
Tag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts
More information is available here
http://ritual.uh.edu/mpst-2018/
Please cite the following papers if you use this dataset:
```
@InProceedings{KAR18.332,
author = {Sudipta Kar and Suraj Maharjan and A. Pastor López-Monroy and Thamar Solorio},
title = {{MPST}: A Corpus of Movie Plot Synopses with Tags},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {May},
date = {7-12},
location = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {979-10-95546-00-9},
language = {english}
}
```
```
@inproceedings{kar-etal-2020-multi,
title = "Multi-view Story Characterization from Movie Plot Synopses and Reviews",
author = "Kar, Sudipta and
Aguilar, Gustavo and
Lapata, Mirella and
Solorio, Thamar",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.454",
doi = "10.18653/v1/2020.emnlp-main.454",
pages = "5629--5646",
abstract = "This paper considers the problem of characterizing stories by inferring properties such as theme and style using written synopses and reviews of movies. We experiment with a multi-label dataset of movie synopses and a tagset representing various attributes of stories (e.g., genre, type of events). Our proposed multi-view model encodes the synopses and reviews using hierarchical attention and shows improvement over methods that only use synopses. Finally, we demonstrate how we can take advantage of such a model to extract a complementary set of story-attributes from reviews without direct supervision. We have made our dataset and source code publicly available at https://ritual.uh.edu/multiview-tag-2020.",
}
```
|
indonesian-nlp | null | \ | null | false | 2 | false | indonesian-nlp/librivox-indonesia | 2022-10-24T09:14:51.000Z | null | false | b46cf4f76274d58e38fc32f7fe33a4814cc370a9 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ace",
"language:bal",
"language:bug",
"language:ind",
"language:min",
"language:jav",
"language:sun",
"license:cc",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:librivox",
"task... | https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/resolve/main/README.md | ---
pretty_name: LibriVox Indonesia 1.0
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ace
- bal
- bug
- ind
- min
- jav
- sun
license: cc
multilinguality:
- multilingual
size_categories:
ace:
- 1K<n<10K
bal:
- 1K<n<10K
bug:
- 1K<n<10K
ind:
- 1K<n<10K
min:
- 1K<n<10K
jav:
- 1K<n<10K
sun:
- 1K<n<10K
source_datasets:
- librivox
task_categories:
- automatic-speech-recognition
---
# Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:cahya.wirawan@gmail.com)
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and corresponding text files we generated from public-domain
audiobooks on [LibriVox](https://librivox.org/). We collected only languages spoken in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio
file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks to speech datasets using forced-alignment software we developed. It supports
multiple languages, including low-resource ones such as Acehnese, Balinese, or Minangkabau, and it can be
applied to other languages without additional work to train the model.
The dataset currently contains 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio files
as we collect them.
### Languages
```
Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
`path` (`string`): The path to the audio file
`language` (`string`): The language of the audio file
`reader` (`string`): The reader Id in LibriVox
`sentence` (`string`): The sentence the user read from the book.
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
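Following the note above, here is a minimal usage sketch with on-the-fly resampling (whether a config name is required by the loader is left as an assumption):
```python
from datasets import load_dataset, Audio

# Pass a config name here if the loader requires one.
ds = load_dataset("indonesian-nlp/librivox-indonesia", split="train")
# Resample on the fly, e.g. to 16 kHz for a typical ASR model.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]  # decodes and resamples only this one file
print(sample["sentence"], sample["audio"]["sampling_rate"])
```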
### Data Splits
The speech material has only train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
``` |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-76e071-15436137 | 2022-09-04T20:49:44.000Z | null | false | 01747f9e3b36fb579319d40898936edcd1a2a6af | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-76e071-15436137/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mae', 'mse', 'rouge', 'squad']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: train
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fd18e2-15446138 | 2022-09-04T20:46:25.000Z | null | false | c5eeea30aae0f63dcdad307f32e4009865949f14 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fd18e2-15446138/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mae', 'mse', 'rouge', 'squad']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: train
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-8aef96-15456139 | 2022-09-04T21:11:30.000Z | null | false | 31825c0782fc7a127974c4b9bbdbc9a94a76fbdc | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-8aef96-15456139/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mae', 'mse', 'rouge', 'squad']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: train
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-25032a-15466140 | 2022-09-04T21:07:41.000Z | null | false | 0d2ac8812872b678eb58191d0bf31a5d291c3759 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-25032a-15466140/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mae', 'mse', 'rouge', 'squad']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: train
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-samsum-samsum-096051-15476141 | 2022-09-04T02:25:02.000Z | null | false | 72c2361371b0b7483028f438a82af75b3554d689 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-096051-15476141/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: train
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
ganchengguang | null | null | null | false | 2 | false | ganchengguang/resume-5label-classification | 2022-09-04T02:53:22.000Z | null | false | ad3dd0050b0c4d75e84eeaad39020c9499a4c0ce | [] | [] | https://huggingface.co/datasets/ganchengguang/resume-5label-classification/resolve/main/README.md | This is a resume sentence classification dataset constructed based on resume text.(https://www.kaggle.com/datasets/oo7kartik/resume-text-batch)
The dataset have five category.(experience education knowledge project others ) And three element label(header content meta).
Because the dataset is a published paper, if you want to use this dataset in a paper or work, please cite BibTex.
```
@article{甘程光2021英文履歴書データ抽出システムへの,
  title={英文履歴書データ抽出システムへの BERT 適用性の検討},
  author={甘程光 and 高橋良英 and others},
  journal={2021 年度 情報処理学会関西支部 支部大会 講演論文集},
  volume={2021},
  year={2021}
}
```
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-xsum-default-a80438-15496142 | 2022-09-04T03:28:51.000Z | null | false | f119500feb836ba3656b0fb9aa6b5291f53c92e9 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:xsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-xsum-default-a80438-15496142/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- xsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: []
dataset_name: xsum
dataset_config: default
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-01441a-15506143 | 2022-09-04T03:30:03.000Z | null | false | f7f6abf17cdb0a878c12cc9bca448a2cb710357f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-01441a-15506143/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
Laasya | null | null | null | false | 1 | false | Laasya/civis-consultation-summaries | 2022-09-04T07:52:15.000Z | null | false | 1c22f0a860b96cf9f817b5718d16980e68000d95 | [] | [
"license:other"
] | https://huggingface.co/datasets/Laasya/civis-consultation-summaries/resolve/main/README.md | ---
license: other
---
|
SamAct | null | null | null | false | 1 | false | SamAct/medium_cleaned | 2022-09-04T08:32:11.000Z | null | false | 7e53c29bdeff7c789c6e250abfcf98a55ff810f8 | [] | [
"license:unlicense"
] | https://huggingface.co/datasets/SamAct/medium_cleaned/resolve/main/README.md | ---
license: unlicense
---
|
Luciano | null | null | null | false | 11 | false | Luciano/lener_br_text_to_lm | 2022-09-04T11:32:31.000Z | null | false | d8da37c6401feb23c939245046f08ea4b1ad4f94 | [] | [
"language:pt",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/Luciano/lener_br_text_to_lm/resolve/main/README.md | ---
annotations_creators: []
language:
- pt
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: 'The LeNER-Br language modeling dataset is a collection of legal texts
in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/).
The legal texts were obtained from the original token classification Hugging Face
LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create
a DatasetDict with train and validation dataset (20%).
The LeNER-Br language modeling dataset allows the finetuning of language models
as BERTimbau base and large.'
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---
# Dataset Card for lener_br_text_to_lm
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts
in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/).
The legal texts were obtained from the original token classification Hugging Face
LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create
a DatasetDict with train and validation dataset (20%).
The LeNER-Br language modeling dataset allows the finetuning of language models
as BERTimbau base and large.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 8316
})
test: Dataset({
features: ['text'],
num_rows: 2079
})
})
```
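A minimal loading sketch (assuming the `datasets` library) matching the structure above:
```python
from datasets import load_dataset

ds = load_dataset("Luciano/lener_br_text_to_lm")
print(ds)                            # DatasetDict with 'train' and 'test' splits
print(ds["train"][0]["text"][:120])  # first 120 characters of one legal text
```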
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
gaurikapse | null | null | null | false | 2 | false | gaurikapse/civis-consultation-summaries | 2022-09-04T18:05:08.000Z | null | false | 9e09fd3f93f3102e35dc67bdcb0d2669d5f93168 | [] | [
"annotations_creators:no-annotation",
"language:en",
"language_creators:expert-generated",
"license:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"tags:legal",
"tags:indian",
"tags:government",
"tags:policy",
"tags:consultations",
"task_categories... | https://huggingface.co/datasets/gaurikapse/civis-consultation-summaries/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: civis-consultation-summaries
size_categories:
- n<1K
source_datasets:
- original
tags:
- legal
- indian
- government
- policy
- consultations
task_categories:
- summarization
task_ids: []
---
|
haritzpuerto | null | null | null | false | 3 | false | haritzpuerto/MetaQA_Datasets | 2022-09-04T15:42:01.000Z | null | false | b9e657fd54956571c5ff5c578a8fb1d3a4e854bd | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets/resolve/main/README.md | ---
license: apache-2.0
---
|
haritzpuerto | null | null | null | false | 2 | false | haritzpuerto/MetaQA_Agents_Predictions | 2022-09-04T20:16:51.000Z | metaqa-combining-expert-agents-for-multi | false | 2636f596c4acb3c8832f51a7048f02b117226453 | [] | [
"arxiv:2112.01922",
"language:en",
"license:apache-2.0",
"multilinguality:monolingual",
"source_datasets:mrqa",
"source_datasets:duorc",
"source_datasets:qamr",
"source_datasets:boolq",
"source_datasets:commonsense_qa",
"source_datasets:hellaswag",
"source_datasets:social_i_qa",
"source_datase... | https://huggingface.co/datasets/haritzpuerto/MetaQA_Agents_Predictions/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators: []
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: MetaQA Agents' Predictions
size_categories: []
source_datasets:
- mrqa
- duorc
- qamr
- boolq
- commonsense_qa
- hellaswag
- social_i_qa
- narrativeqa
tags:
- multi-agent question answering
- multi-agent QA
- predictions
task_categories:
- question-answering
task_ids: []
paperswithcode_id: metaqa-combining-expert-agents-for-multi
---
# Dataset Card for MetaQA Agents' Predictions
## Dataset Description
- **Repository:** [MetaQA's Repository](https://github.com/UKPLab/MetaQA)
- **Paper:** [MetaQA: Combining Expert Agents for Multi-Skill Question Answering](https://arxiv.org/abs/2112.01922)
- **Point of Contact:** [Haritz Puerto](mailto:puerto@ukp.informatik.tu-darmstadt.de)
## Dataset Summary
This dataset contains the answer predictions of the QA agents for the [QA datasets](https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets) used in [MetaQA paper](https://arxiv.org/abs/2112.01922). In particular, it contains the following QA agents' predictions:
### Span-Extraction Agents
- Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
### Multiple-Choice Agents
- Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for:
- BoolQ
### Abstractive Agents
- Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for:
- DROP
- Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for:
- NarrativeQA
### Multimodal Agents
- Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for:
- HybridQA
### Languages
All the QA datasets are in English, and thus the agents' predictions are also in English.
## Dataset Structure
Each agent has a folder. Inside, there is a folder for each dataset containing the following files:
- predict_nbest_predictions.json
- predict_predictions.json / predictions.json
- predict_results.json (for span-extraction agents)
### Structure of predict_nbest_predictions.json
```
{id: [{"start_logit": ...,
"end_logit": ...,
"text": ...,
"probability": ... }]}
```
### Structure of predict_predictions.json
```
{id: answer_text}
```
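A minimal sketch of consuming these files (the file path is illustrative; each agent/dataset folder has its own copy), picking the highest-probability candidate per question id from an n-best file:
```python
import json

with open("predict_nbest_predictions.json") as f:
    nbest = json.load(f)

# Keep the highest-probability candidate answer for each question id.
best = {qid: max(cands, key=lambda c: c["probability"])["text"]
        for qid, cands in nbest.items()}
```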
### Data Splits
All the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop new multi-agent models and analyze the predictions of QA models.
### Discussion of Biases
The QA models used to create these predictions may not be perfect; they may generate incorrect answers and contain biases. The release of these predictions may help identify these flaws in the models.
## Additional Information
### License
The MetaQA Agents' Predictions dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation
```
@article{Puerto2021MetaQACE,
title={MetaQA: Combining Expert Agents for Multi-Skill Question Answering},
author={Haritz Puerto and Gözde Gül Şahin and Iryna Gurevych},
journal={ArXiv},
year={2021},
volume={abs/2112.01922}
}
``` |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-0e4017-15526144 | 2022-09-04T16:46:04.000Z | null | false | a189eae9498de2ace8b54290c3f94b7286a4c7c2 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-0e4017-15526144/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen1234](https://huggingface.co/SamuelAllen1234) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-a4ff98-15536145 | 2022-09-04T16:46:49.000Z | null | false | 9d4e8f919e11525f564bd99fdfa71164b26c299a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-a4ff98-15536145/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen1234](https://huggingface.co/SamuelAllen1234) for evaluating this model. |
munggok | null | \ | false | 1 | false | munggok/KoPI-NLLB | 2022-09-06T05:49:03.000Z | null | false | 02de1f4f6049b8d7f53d924789fbf67aa5244139 | [] | [] | https://huggingface.co/datasets/munggok/KoPI-NLLB/resolve/main/README.md | KopI(Korpus Perayapan Indonesia)-NLLB, is Indonesian family language(aceh,bali,banjar,indonesia,jawa,minang,sunda) only extracted from NLLB Dataset, [allenai/nllb](https://huggingface.co/datasets/allenai/nllb)
each language set also filtered using some some deduplicate technique such as exact hash(md5) dedup technique and minhash LSH neardup
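As an illustration of the exact-hash step (a minimal sketch; the actual pipeline code is not included here):
```python
import hashlib

def md5_dedup(texts):
    # Exact-hash deduplication: keep only the first occurrence of each text.
    seen, kept = set(), []
    for t in texts:
        h = hashlib.md5(t.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(t)
    return kept

print(md5_dedup(["halo dunia", "halo dunia", "apa kabar"]))  # -> ['halo dunia', 'apa kabar']
```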
More details coming soon. | |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-70f55d-15546146 | 2022-09-04T18:28:25.000Z | null | false | 654c7c822d4e30e593b84c0d17ffe8f5415596d5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-70f55d-15546146/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen1234/testing
metrics: ['rouge', 'mse', 'mae', 'squad']
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen1234/testing
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen12345](https://huggingface.co/SamuelAllen12345) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-samsum-samsum-85416c-15556147 | 2022-09-04T18:27:44.000Z | null | false | df39f858b9b08963848eeab993371aefa449f435 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-85416c-15556147/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: ['rouge', 'mse', 'mae', 'squad']
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen12345](https://huggingface.co/SamuelAllen12345) for evaluating this model. |
gaurikapse | null | null | null | false | 2 | false | gaurikapse/civis-consultations-transposed-data | 2022-09-04T18:45:18.000Z | null | false | e7ba41ad9c6e214c72e33639393fcb300187a5e4 | [] | [
"license:other"
] | https://huggingface.co/datasets/gaurikapse/civis-consultations-transposed-data/resolve/main/README.md | ---
license: other
---
|
namban | null | null | null | false | 2 | false | namban/ledgar | 2022-09-04T20:00:44.000Z | null | false | 57f02e50acc848309ad50777cc8988752d19b5d7 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/namban/ledgar/resolve/main/README.md | ---
license: afl-3.0
---
|
gandinaalikekeede | null | null | null | false | 2 | false | gandinaalikekeede/ledgar_cleaner | 2022-09-04T20:12:30.000Z | null | false | 9824a87c0f39341c8a4427e6c8778ef59c5fa5c3 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/gandinaalikekeede/ledgar_cleaner/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-00af64-15586150 | 2022-09-05T02:42:07.000Z | null | false | 0c95d910357f5e262bd04790e5122eda781573fe | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:squad_v2"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-00af64-15586150/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
metrics: []
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-175281-15596151 | 2022-09-05T03:46:20.000Z | null | false | bb02409110bba66779b85f0271cef0f482f04404 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-175281-15596151/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mse']
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen123](https://huggingface.co/SamuelAllen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-41c5cd-15606152 | 2022-09-05T03:46:21.000Z | null | false | a63bf346e599e6796a015f39c17baa988b9e9f7e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-41c5cd-15606152/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mae']
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen123](https://huggingface.co/SamuelAllen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-cc5bdf-15616153 | 2022-09-05T03:47:47.000Z | null | false | 3cb8c00aa2e79441a8358d44e42652bc6c90e10a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-cc5bdf-15616153/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mse']
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
pixta-ai | null | null | null | false | 1 | false | pixta-ai/faces-with-various-races-and-emotions | 2022-09-15T03:44:01.000Z | null | false | a82f7d3cc529a9ede6d1416a668f96ecefb433a3 | [] | [] | https://huggingface.co/datasets/pixta-ai/faces-with-various-races-and-emotions/resolve/main/README.md | ---
---
# Dataset Card for pixta-ai/faces-with-various-races-and-emotions
## Dataset Description
- **Homepage:** https://www.pixta.ai/?utm_source=huggingface&utm_medium=embeddedlink&utm_campaign=community&utm_id=huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset consists of 600 face images covering a variety of emotions and mixed races. Ages range from 20 to 60 years old, with a balanced gender distribution, no occlusion, and head directions within 45 degrees up-down and left-right.
For more details, please refer to the link: https://www.pixta.ai/
Or send your inquiries to contact@pixta.ai
### Supported Tasks and Leaderboards
face-detection, emotion-recognition, computer-vision: The dataset can be used to train or enhance models for face detection and emotion recognition.
### Languages
English
### License
Academic & commercial usage |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-lener_br-lener_br-f0f34b-15626154 | 2022-09-05T05:09:08.000Z | null | false | 8c35b13454d43f2319e368f1fe7c97a878af4c46 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:lener_br"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-lener_br-lener_br-f0f34b-15626154/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- lener_br
eval_info:
task: entity_extraction
model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
metrics: []
dataset_name: lener_br
dataset_config: lener_br
dataset_split: train
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-samsum-samsum-e82d51-15636155 | 2022-09-05T06:37:40.000Z | null | false | 4022c7affe48f8cf58cc541414c0a35a5eadd6d8 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-e82d51-15636155/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['mse', 'mae']
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
victor | null | null | null | false | 3 | false | victor/synthetic-donuts | 2022-09-05T08:05:51.000Z | null | false | bb6df4b8fdcd1576302511620ad6a8465e13fb39 | [] | [
"license:mit"
] | https://huggingface.co/datasets/victor/synthetic-donuts/resolve/main/README.md | ---
license: mit
---
|
victor | null | null | null | false | 2 | false | victor/autotrain-data-satellite-image-classification | 2022-09-05T09:30:13.000Z | null | false | e0b1e4d497fe81cad3e4695ae1c6c5ca7d64656d | [] | [
"task_categories:image-classification"
] | https://huggingface.co/datasets/victor/autotrain-data-satellite-image-classification/resolve/main/README.md | ---
task_categories:
- image-classification
---
# AutoTrain Dataset for project: satellite-image-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project satellite-image-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 CMYK PIL image>",
"target": 0
},
{
"image": "<256x256 CMYK PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1200 |
| valid | 300 |
|
openclimatefix | null | null | null | false | 1 | false | openclimatefix/eumetsat-rss | 2022-11-12T16:40:27.000Z | null | false | adf03eda7e6ce0c4cf25c5d946ca9e27dc355c2e | [] | [
"license:other"
] | https://huggingface.co/datasets/openclimatefix/eumetsat-rss/resolve/main/README.md | ---
license: other
---
This dataset consists of EUMETSAT Rapid Scan Service (RSS) imagery from 2014 to October 2022. The data comes in two formats: the High Resolution Visible (HRV) channel, which covers Europe and North Africa at a resolution of roughly 2-3 km per pixel and is shifted each day to better image where the sun is shining, and the non-HRV data, which comprises 11 spectral channels at a 6-9 km resolution covering the top third of the Earth centered on Europe. Images are taken 5 minutes apart and have been compressed and stacked into 1000-image Zarr stores. Using Xarray, these files can be opened together to create one large Zarr store of HRV or non-HRV imagery.
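A minimal loading sketch (the path pattern below is an assumption; consult the repository file listing for the actual store names):
```python
# A minimal sketch of loading the stacked Zarr stores with Xarray.
import glob

import xarray as xr

# Hypothetical path pattern; check the repository layout for the
# actual store names and directory structure.
stores = sorted(glob.glob("eumetsat-rss/hrv/*.zarr"))

# Open every store lazily and combine them along their shared
# coordinates into one large dataset of HRV imagery.
ds = xr.open_mfdataset(stores, engine="zarr", combine="by_coords")
print(ds)
```
|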
SetFit | null | null | null | false | 18 | false | SetFit/ade_corpus_v2_classification | 2022-09-05T14:14:53.000Z | null | false | 0d5751865d26618e2141fe0aecf06477d93d0955 | [] | [] | https://huggingface.co/datasets/SetFit/ade_corpus_v2_classification/resolve/main/README.md | # ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This is a dataset for classifying whether a sentence is ADE-related (True) or not (False).
**Train size: 17,637**
**Test size: 5,879**
[Source dataset](https://huggingface.co/datasets/ade_corpus_v2)
[Paper](https://www.sciencedirect.com/science/article/pii/S1532046412000615)
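A minimal loading sketch (the column names are not documented in this card, so inspect `dataset.column_names` to confirm them):
```python
from datasets import load_dataset

dataset = load_dataset("SetFit/ade_corpus_v2_classification")
print(dataset)  # expected splits: train (17,637) and test (5,879)

sample = dataset["train"][0]
print(sample)  # assumed columns: the sentence text and its ADE label
```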
|
Osaleh | null | null | null | false | 1 | false | Osaleh/NE_ArSAS | 2022-09-05T11:52:06.000Z | null | false | 432cc594adf4bf4f47d7e3bfbf32b7c51608eeae | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Osaleh/NE_ArSAS/resolve/main/README.md | ---
license: afl-3.0
---
|
mteb | null | null | null | false | 2 | false | mteb/mteb-example-submission | 2022-09-05T19:25:39.000Z | null | false | 409ea09f4f1af6cd28bdd26694f3f8aa679f6120 | [] | [
"benchmark:mteb",
"type:evaluation"
] | https://huggingface.co/datasets/mteb/mteb-example-submission/resolve/main/README.md | ---
benchmark: mteb
type: evaluation
--- |
asaxena1990 | null | null | null | false | 2 | false | asaxena1990/datasetpreview | 2022-09-05T12:18:05.000Z | null | false | a6532be4f02ca12a871ba4910dc2b72e7b3cf4e2 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/asaxena1990/datasetpreview/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
|
asaxena1990 | null | null | null | false | 2 | false | asaxena1990/datasetpreviewcsv | 2022-09-05T12:51:14.000Z | null | false | 698f0d0c15fbc15ca98d8757c294f397c5254a6a | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/asaxena1990/datasetpreviewcsv/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
g8a9 | null | null | null | false | 174 | false | g8a9/europarl_en-it | 2022-09-07T10:14:04.000Z | null | false | 6df1024387c78af81538a7223c70a8101c61d6aa | [] | [
"language:en",
"language:it",
"license:unknown",
"multilinguality:monolingual",
"multilinguality:translation",
"task_categories:translation"
] | https://huggingface.co/datasets/g8a9/europarl_en-it/resolve/main/README.md | ---
language:
- en
- it
license:
- unknown
multilinguality:
- monolingual
- translation
pretty_name: Europarl v7 (en-it split)
tags: []
task_categories:
- translation
task_ids: []
---
# Dataset Card for Europarl v7 (en-it split)
This dataset contains only the English-Italian split of Europarl v7.
We created the dataset to provide it to the [M2L 2022 Summer School](https://www.m2lschool.org/) students.
For all the information on the dataset, please refer to: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/)
## Dataset Structure
### Data Fields
- sent_en: English transcript
- sent_it: Italian translation
### Data Splits
We created three custom training/validation/testing splits. Feel free to rearrange them if needed. These ARE NOT by any means official splits.
- train (1717204 pairs)
- validation (190911 pairs)
- test (1000 pairs)
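A minimal usage sketch, assuming the dataset loads with the standard `datasets` API and exposes the fields listed above:
```python
from datasets import load_dataset

# Load the English-Italian pairs; split names follow the list above.
dataset = load_dataset("g8a9/europarl_en-it")

pair = dataset["train"][0]
print(pair["sent_en"])  # English transcript
print(pair["sent_it"])  # Italian translation
```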
### Citation Information
If using the dataset, please cite:
`Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In Proceedings of machine translation summit x: papers (pp. 79-86).`
### Contributions
Thanks to [@g8a9](https://github.com/g8a9) for adding this dataset.
|
batterydata | null | null | null | false | 27 | false | batterydata/battery-device-data-qa | 2022-09-05T15:54:40.000Z | null | false | 37ab06f69deddb5dc9aed9214ee7278c25a1179d | [] | [
"language:en",
"license:apache-2.0",
"task_categories:question-answering"
] | https://huggingface.co/datasets/batterydata/battery-device-data-qa/resolve/main/README.md | ---
language:
- en
license:
- apache-2.0
task_categories:
- question-answering
pretty_name: 'Battery Device Question Answering Dataset'
---
# Battery Device QA Data
Battery device records, including anode, cathode, and electrolyte.
Examples of the question answering evaluation dataset:
{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645}
{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673}
{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284}
# Usage
```
from datasets import load_dataset
dataset = load_dataset("batterydata/battery-device-data-qa")
```
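Each record carries the fields shown in the examples above; a short inspection sketch (the `train` split name is an assumption, so print `dataset` to see the actual splits):
```python
from datasets import load_dataset

dataset = load_dataset("batterydata/battery-device-data-qa")
record = dataset["train"][0]  # assumed split name

print(record["question"], "->", record["answer"])
# 'start index' is the character offset of the answer within the context.
start = record["start index"]
print(record["context"][start:start + len(record["answer"])])
```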
# Citation
```
@article{huang2022batterybert,
title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
author={Huang, Shu and Cole, Jacqueline M},
journal={J. Chem. Inf. Model.},
year={2022},
doi={10.1021/acs.jcim.2c00035},
url={DOI:10.1021/acs.jcim.2c00035},
pages={DOI: 10.1021/acs.jcim.2c00035},
publisher={ACS Publications}
}
``` |
open-source-metrics | null | null | null | false | 2 | false | open-source-metrics/diffusers-dependents | 2022-11-09T16:17:45.000Z | null | false | 0cf1497a667bb59681e18dc9de041274ed435812 | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/diffusers-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: diffusers metrics
tags:
- github-stars
---
# diffusers metrics
This dataset contains metrics about the huggingface/diffusers package.
Number of repositories in the dataset: 160
Number of packages in the dataset: 2
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/diffusers/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 0 packages that have more than 1000 stars.
There are 3 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[JoaoLages/diffusers-interpret](https://github.com/JoaoLages/diffusers-interpret): 121
[samedii/perceptor](https://github.com/samedii/perceptor): 1
*Repository*
[gradio-app/gradio](https://github.com/gradio-app/gradio): 9168
[divamgupta/diffusionbee-stable-diffusion-ui](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui): 4264
[AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): 3527
[bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino): 925
[nateraw/stable-diffusion-videos](https://github.com/nateraw/stable-diffusion-videos): 899
[sharonzhou/long_stable_diffusion](https://github.com/sharonzhou/long_stable_diffusion): 360
[Eventual-Inc/Daft](https://github.com/Eventual-Inc/Daft): 251
[JoaoLages/diffusers-interpret](https://github.com/JoaoLages/diffusers-interpret): 121
[GT4SD/gt4sd-core](https://github.com/GT4SD/gt4sd-core): 113
[brycedrennan/imaginAIry](https://github.com/brycedrennan/imaginAIry): 104
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 0 packages that have more than 200 forks.
There are 2 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
*Repository*
[gradio-app/gradio](https://github.com/gradio-app/gradio): 574
[AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): 377
[bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino): 108
[divamgupta/diffusionbee-stable-diffusion-ui](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui): 96
[nateraw/stable-diffusion-videos](https://github.com/nateraw/stable-diffusion-videos): 73
[GT4SD/gt4sd-core](https://github.com/GT4SD/gt4sd-core): 34
[sharonzhou/long_stable_diffusion](https://github.com/sharonzhou/long_stable_diffusion): 29
[coreweave/kubernetes-cloud](https://github.com/coreweave/kubernetes-cloud): 20
[bananaml/serverless-template-stable-diffusion](https://github.com/bananaml/serverless-template-stable-diffusion): 15
[AmericanPresidentJimmyCarter/yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot): 9
[NickLucche/stable-diffusion-nvidia-docker](https://github.com/NickLucche/stable-diffusion-nvidia-docker): 9
[vopani/waveton](https://github.com/vopani/waveton): 9
[harubaru/discord-stable-diffusion](https://github.com/harubaru/discord-stable-diffusion): 9
|
open-source-metrics | null | null | null | false | 1 | false | open-source-metrics/accelerate-dependents | 2022-11-09T15:50:48.000Z | null | false | 5236613eeb85a0ea21b5c837b33fda92297fd70d | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/accelerate-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: accelerate metrics
tags:
- github-stars
---
# accelerate metrics
This dataset contains metrics about the huggingface/accelerate package.
Number of repositories in the dataset: 727
Number of packages in the dataset: 37
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/accelerate/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 10 packages that have more than 1000 stars.
There are 16 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 70480
[fastai/fastai](https://github.com/fastai/fastai): 22774
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 7674
[kornia/kornia](https://github.com/kornia/kornia): 7103
[facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d): 6548
[huggingface/diffusers](https://github.com/huggingface/diffusers): 5457
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 5113
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 2985
[lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch): 1727
[abhishekkrthakur/tez](https://github.com/abhishekkrthakur/tez): 1101
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70480
[google-research/google-research](https://github.com/google-research/google-research): 25092
[ray-project/ray](https://github.com/ray-project/ray): 22047
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 7674
[kornia/kornia](https://github.com/kornia/kornia): 7103
[huggingface/diffusers](https://github.com/huggingface/diffusers): 5457
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 5113
[wandb/wandb](https://github.com/wandb/wandb): 4738
[skorch-dev/skorch](https://github.com/skorch-dev/skorch): 4679
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 2985
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 9 packages that have more than 200 forks.
There are 16 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[fastai/fastai](https://github.com/fastai/fastai): 7297
[facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d): 975
[kornia/kornia](https://github.com/kornia/kornia): 723
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 582
[huggingface/diffusers](https://github.com/huggingface/diffusers): 490
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 412
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 366
[lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch): 235
[abhishekkrthakur/tez](https://github.com/abhishekkrthakur/tez): 136
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[google-research/google-research](https://github.com/google-research/google-research): 6139
[ray-project/ray](https://github.com/ray-project/ray): 3876
[roatienza/Deep-Learning-Experiments](https://github.com/roatienza/Deep-Learning-Experiments): 729
[kornia/kornia](https://github.com/kornia/kornia): 723
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 582
[huggingface/diffusers](https://github.com/huggingface/diffusers): 490
[nlp-with-transformers/notebooks](https://github.com/nlp-with-transformers/notebooks): 436
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 412
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 366
|
open-source-metrics | null | null | null | false | 1 | false | open-source-metrics/evaluate-dependents | 2022-11-09T15:48:30.000Z | null | false | b4a925e5b519692323b5b323ae01317302f8f6ac | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/evaluate-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: evaluate metrics
tags:
- github-stars
---
# evaluate metrics
This dataset contains metrics about the huggingface/evaluate package.
Number of repositories in the dataset: 106
Number of packages in the dataset: 3
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/evaluate/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There is 1 package that has more than 1000 stars.
There are 2 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[huggingface/accelerate](https://github.com/huggingface/accelerate): 2884
[fcakyon/video-transformers](https://github.com/fcakyon/video-transformers): 4
[entelecheia/ekorpkit](https://github.com/entelecheia/ekorpkit): 2
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70481
[huggingface/accelerate](https://github.com/huggingface/accelerate): 2884
[huggingface/evaluate](https://github.com/huggingface/evaluate): 878
[pytorch/benchmark](https://github.com/pytorch/benchmark): 406
[imhuay/studies](https://github.com/imhuay/studies): 161
[AIRC-KETI/ke-t5](https://github.com/AIRC-KETI/ke-t5): 128
[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci): 32
[philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20
[hms-dbmi/scw](https://github.com/hms-dbmi/scw): 19
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15
[lewtun/dl4phys](https://github.com/lewtun/dl4phys): 15
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There is 1 package that has more than 200 forks.
There are 2 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[huggingface/accelerate](https://github.com/huggingface/accelerate): 224
[fcakyon/video-transformers](https://github.com/fcakyon/video-transformers): 0
[entelecheia/ekorpkit](https://github.com/entelecheia/ekorpkit): 0
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[huggingface/accelerate](https://github.com/huggingface/accelerate): 224
[pytorch/benchmark](https://github.com/pytorch/benchmark): 131
[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci): 67
[huggingface/evaluate](https://github.com/huggingface/evaluate): 48
[imhuay/studies](https://github.com/imhuay/studies): 42
[AIRC-KETI/ke-t5](https://github.com/AIRC-KETI/ke-t5): 14
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14
[hms-dbmi/scw](https://github.com/hms-dbmi/scw): 11
[kili-technology/automl](https://github.com/kili-technology/automl): 5
[whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5
|
open-source-metrics | null | null | null | false | 2 | false | open-source-metrics/optimum-dependents | 2022-11-09T16:03:46.000Z | null | false | 7b0f1916677e79dea7a4fc603eaf43d00afdc7e3 | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/optimum-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: optimum metrics
tags:
- github-stars
---
# optimum metrics
This dataset contains metrics about the huggingface/optimum package.
Number of repositories in the dataset: 19
Number of packages in the dataset: 6
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/optimum/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 0 packages that have more than 1000 stars.
There are 0 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24
[bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 10
*Repository*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288
[marqo-ai/marqo](https://github.com/marqo-ai/marqo): 265
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114
[graphcore/tutorials](https://github.com/graphcore/tutorials): 65
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24
[philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 0 packages that have more than 200 forks.
There are 0 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
[bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 2
*Repository*
[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82
[graphcore/tutorials](https://github.com/graphcore/tutorials): 33
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10
[marqo-ai/marqo](https://github.com/marqo-ai/marqo): 6
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6
[whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 3
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
|
open-source-metrics | null | null | null | false | 2 | false | open-source-metrics/tokenizers-dependents | 2022-11-09T16:16:31.000Z | null | false | c5462dc479106c0accbb6b8abcc2ba56d617cd86 | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/tokenizers-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: tokenizers metrics
tags:
- github-stars
---
# tokenizers metrics
This dataset contains metrics about the huggingface/tokenizers package.
Number of repositories in the dataset: 11460
Number of packages in the dataset: 124
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/tokenizers/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 14 packages that have more than 1000 stars.
There are 41 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 70475
[hankcs/HanLP](https://github.com/hankcs/HanLP): 26958
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9439
[UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 8461
[lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 4816
[ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 3303
[neuml/txtai](https://github.com/neuml/txtai): 2530
[QData/TextAttack](https://github.com/QData/TextAttack): 2087
[lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 1981
[utterworks/fast-bert](https://github.com/utterworks/fast-bert): 1760
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70480
[hankcs/HanLP](https://github.com/hankcs/HanLP): 26958
[RasaHQ/rasa](https://github.com/RasaHQ/rasa): 14842
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440
[gradio-app/gradio](https://github.com/gradio-app/gradio): 9169
[UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 8462
[microsoft/unilm](https://github.com/microsoft/unilm): 6650
[EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo): 6431
[moyix/fauxpilot](https://github.com/moyix/fauxpilot): 6300
[lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 4816
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 11 packages that have more than 200 forks.
There are 39 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 16158
[hankcs/HanLP](https://github.com/hankcs/HanLP): 7388
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920
[UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 1695
[ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 658
[lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 543
[utterworks/fast-bert](https://github.com/utterworks/fast-bert): 336
[nyu-mll/jiant](https://github.com/nyu-mll/jiant): 273
[QData/TextAttack](https://github.com/QData/TextAttack): 269
[lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 245
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[hankcs/HanLP](https://github.com/hankcs/HanLP): 7388
[RasaHQ/rasa](https://github.com/RasaHQ/rasa): 4105
[plotly/dash-sample-apps](https://github.com/plotly/dash-sample-apps): 2795
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920
[UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 1695
[microsoft/unilm](https://github.com/microsoft/unilm): 1223
[openvinotoolkit/open_model_zoo](https://github.com/openvinotoolkit/open_model_zoo): 1207
[bhaveshlohana/HacktoberFest2020-Contributions](https://github.com/bhaveshlohana/HacktoberFest2020-Contributions): 1020
[data-science-on-aws/data-science-on-aws](https://github.com/data-science-on-aws/data-science-on-aws): 884
|
open-source-metrics | null | null | null | false | 2 | false | open-source-metrics/datasets-dependents | 2022-11-09T16:03:32.000Z | null | false | 7733ba49cee9c687a83d4b71b640e2c44fd178a5 | [] | [
"license:apache-2.0",
"tags:github-stars"
] | https://huggingface.co/datasets/open-source-metrics/datasets-dependents/resolve/main/README.md | ---
license: apache-2.0
pretty_name: datasets metrics
tags:
- github-stars
---
# datasets metrics
This dataset contains metrics about the huggingface/datasets package.
Number of repositories in the dataset: 4997
Number of packages in the dataset: 215
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/datasets/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 22 packages that have more than 1000 stars.
There are 43 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 70480
[fastai/fastbook](https://github.com/fastai/fastbook): 16052
[jina-ai/jina](https://github.com/jina-ai/jina): 16052
[borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 12873
[allenai/allennlp](https://github.com/allenai/allennlp): 11198
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440
[huggingface/tokenizers](https://github.com/huggingface/tokenizers): 5867
[huggingface/diffusers](https://github.com/huggingface/diffusers): 5457
[PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 5422
[HIT-SCIR/ltp](https://github.com/HIT-SCIR/ltp): 4058
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70481
[google-research/google-research](https://github.com/google-research/google-research): 25092
[ray-project/ray](https://github.com/ray-project/ray): 22047
[allenai/allennlp](https://github.com/allenai/allennlp): 11198
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440
[gradio-app/gradio](https://github.com/gradio-app/gradio): 9169
[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 7343
[microsoft/unilm](https://github.com/microsoft/unilm): 6650
[deeppavlov/DeepPavlov](https://github.com/deeppavlov/DeepPavlov): 5844
[huggingface/diffusers](https://github.com/huggingface/diffusers): 5457
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 17 packages that have more than 200 forks.
There are 40 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[fastai/fastbook](https://github.com/fastai/fastbook): 6033
[allenai/allennlp](https://github.com/allenai/allennlp): 2218
[jina-ai/jina](https://github.com/jina-ai/jina): 1967
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920
[PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 1583
[HIT-SCIR/ltp](https://github.com/HIT-SCIR/ltp): 988
[borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 945
[ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 658
[huggingface/tokenizers](https://github.com/huggingface/tokenizers): 502
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[google-research/google-research](https://github.com/google-research/google-research): 6139
[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 5493
[ray-project/ray](https://github.com/ray-project/ray): 3876
[allenai/allennlp](https://github.com/allenai/allennlp): 2218
[facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920
[PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 1583
[x4nth055/pythoncode-tutorials](https://github.com/x4nth055/pythoncode-tutorials): 1435
[microsoft/unilm](https://github.com/microsoft/unilm): 1223
[deeppavlov/DeepPavlov](https://github.com/deeppavlov/DeepPavlov): 1055
|
batterydata | null | null | null | false | 19 | false | batterydata/pos_tagging | 2022-09-05T16:05:33.000Z | null | false | 4cae50882a24a955155db7d170b571e93ab8102f | [] | [
"language:en",
"license:apache-2.0",
"task_categories:token-classification"
] | https://huggingface.co/datasets/batterydata/pos_tagging/resolve/main/README.md | ---
language:
- en
license:
- apache-2.0
task_categories:
- token-classification
pretty_name: 'Part-of-speech (POS) Tagging Dataset for BatteryDataExtractor'
---
# POS Tagging Dataset
## Original Data Source
#### Conll2003
E. F. Tjong Kim Sang and F. De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003, pp. 142–147.
#### The Penn Treebank
M. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput. Linguist., 1993, 19, 313–330.
## Citation
BatteryDataExtractor: battery-aware text-mining software embedded with BERT models |
batterydata | null | null | null | false | 1 | false | batterydata/abbreviation_detection | 2022-09-05T16:02:48.000Z | null | false | 39190a2140c5fc237fed556ef88449015271850b | [] | [
"arxiv:2204.12061",
"language:en",
"license:apache-2.0",
"task_categories:token-classification"
] | https://huggingface.co/datasets/batterydata/abbreviation_detection/resolve/main/README.md | ---
language:
- en
license:
- apache-2.0
task_categories:
- token-classification
pretty_name: 'Abbreviation Detection Dataset for BatteryDataExtractor'
---
# Abbreviation Detection Dataset
## Original Data Source
#### PLOS
L. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan, PLOD: An Abbreviation Detection Dataset for Scientific Documents, 2022, https://arxiv.org/abs/2204.12061.
#### SDU@AAAI-21
A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen, Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 3285–3301.
## Citation
BatteryDataExtractor: battery-aware text-mining software embedded with BERT models |
batterydata | null | null | null | false | 11 | false | batterydata/cner | 2022-09-05T16:07:43.000Z | null | false | 4976bb5ace12abe22747787d3663a203946c319e | [] | [
"arxiv:2006.03039",
"language:en",
"license:apache-2.0",
"task_categories:token-classification"
] | https://huggingface.co/datasets/batterydata/cner/resolve/main/README.md | ---
language:
- en
license:
- apache-2.0
task_categories:
- token-classification
pretty_name: 'Chemical Named Entity Recognition (CNER) Dataset for BatteryDataExtractor'
---
# CNER Dataset
## Original Data Source
#### CHEMDNER
M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17.
#### MatScholar
L. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702.
#### SOFC
A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-exp corpus and neural approaches to information extraction in the materials science domain, 2020, https://arxiv.org/abs/2006.03039.
#### BioNLP
G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14.
## Citation
BatteryDataExtractor: battery-aware text-mining software embedded with BERT models |
daspartho | null | null | null | false | 6 | false | daspartho/anime-or-not | 2022-09-12T06:52:56.000Z | null | false | 9b0c3068e673d857989dd4d001a118cd945d50e2 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/daspartho/anime-or-not/resolve/main/README.md | ---
license: apache-2.0
---
|
poojaruhal | null | null | null | false | 1 | false | poojaruhal/Code-comment-classification | 2022-10-16T11:11:46.000Z | null | false | 3d2bbff4d30d5c41d2cbf5b1d55fbc8d10cfdbaa | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:crowdsourced",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:'source code comments'",
"tags:'java class comments'",
"tags:'python class comments'",
... | https://huggingface.co/datasets/poojaruhal/Code-comment-classification/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Code-comment-classification'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- 'source code comments'
- 'java class comments'
- 'python class comments'
- 'smalltalk class comments'
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-label-classification
---
# Dataset Card for Code Comment Classification
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/poojaruhal/RP-class-comment-classification
- **Repository:** https://github.com/poojaruhal/RP-class-comment-classification
- **Paper:** https://doi.org/10.1016/j.jss.2021.111047
- **Point of Contact:** https://poojaruhal.github.io
### Dataset Summary
The dataset contains class comments extracted from various large and diverse open-source projects in three programming languages: Java, Smalltalk, and Python.
### Supported Tasks and Leaderboards
Single-label text classification and Multi-label text classification
### Languages
Java, Python, Smalltalk
## Dataset Structure
### Data Instances
```json
{
"class" : "Absy.java",
"comment":"* Azure Blob File System implementation of AbstractFileSystem. * This impl delegates to the old FileSystem",
"summary":"Azure Blob File System implementation of AbstractFileSystem.",
"expand":"This impl delegates to the old FileSystem",
"rational":"",
"deprecation":"",
"usage":"",
"exception":"",
"todo":"",
"incomplete":"",
"commentedcode":"",
"directive":"",
"formatter":"",
"license":"",
"ownership":"",
"pointer":"",
"autogenerated":"",
"noise":"",
"warning":"",
"recommendation":"",
"precondition":"",
"codingGuidelines":"",
"extension":"",
"subclassexplnation":"",
"observation":"",
}
```
### Data Fields
class: name of the class with the language extension.
comment: class comment of the class
categories: the category the sentence is classified into, indicating a particular type of information.
### Data Splits
10-fold cross validation
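A minimal sketch of a 10-fold protocol, using scikit-learn's `KFold` as an illustrative stand-in; the authors' exact fold assignment is not specified in this card:
```python
# Illustrative 10-fold cross-validation over the comment sentences.
from sklearn.model_selection import KFold

# Hypothetical placeholder data; load the CSV files from the
# repository for real experiments.
sentences = [f"example class comment {i}" for i in range(100)]

kfold = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(sentences)):
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
    # fit a (multi-label) classifier on the train indices, score on test
```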
## Dataset Creation
### Curation Rationale
To identify the information embedded in class comments across various projects and programming languages.
### Source Data
#### Initial Data Collection and Normalization
It contains class comments extracted from various open-source projects in three programming languages: Java, Smalltalk, and Python.
- #### Java
Each file contains all the extracted class comments from one project. We have a total of six Java projects. We chose a sample of 350 comments from all these files for our experiment.
- [Eclipse.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/) - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Eclipse](https://github.com/eclipse).
- [Guava.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guava.csv) - Extracted class comments from the Guava project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guava](https://github.com/google/guava).
- [Guice.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guice.csv) - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guice](https://github.com/google/guice).
- [Hadoop.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Hadoop.csv) - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Hadoop](https://github.com/apache/hadoop)
- [Spark.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Spark.csv) - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Spark](https://github.com/apache/spark)
- [Vaadin.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Vaadin.csv) - Extracted class comments from the Vaadin project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Vaadin](https://github.com/vaadin/framework)
- [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Parser_Details.md) - Details of the parser used to parse class comments of Java [ Projects](https://doi.org/10.5281/zenodo.4311839)
- #### Smalltalk/
Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment.
- [GToolkit.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/GToolkit.csv) - Extracted class comments from the GToolkit project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [Moose.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Moose.csv) - Extracted class comments from the Moose project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [PetitParser.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PetitParser.csv) - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [Pillar.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Pillar.csv) - Extracted class comments from the Pillar project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [PolyMath.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PolyMath.csv) - Extracted class comments from the PolyMath project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [Roassal2.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Roassal2.csv) -Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [Seaside.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Seaside.csv) - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo.
- [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Parser_Details.md) - Details of the parser used to parse class comments of Pharo [ Projects](https://doi.org/10.5281/zenodo.4311839)
- #### Python/
Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment.
- [Django.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Django.csv) - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Django](https://github.com/django)
- [IPython.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/IPython.csv) - Extracted class comments from the Ipython project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub[IPython](https://github.com/ipython/ipython)
- [Mailpile.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Mailpile.csv) - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Mailpile](https://github.com/mailpile/Mailpile)
- [Pandas.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pandas.csv) - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [pandas](https://github.com/pandas-dev/pandas)
- [Pipenv.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pipenv.csv) - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Pipenv](https://github.com/pypa/pipenv)
- [Pytorch.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pytorch.csv) - Extracted class comments from the Pytorch project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [PyTorch](https://github.com/pytorch/pytorch)
- [Requests.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Requests.csv) - Extracted class comments from the Requests project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Requests](https://github.com/psf/requests/)
- [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Parser_Details.md) - Details of the parser used to parse class comments of Python [ Projects](https://doi.org/10.5281/zenodo.4311839)
### Annotations
#### Annotation process
Four evaluators (all authors of the paper, https://doi.org/10.1016/j.jss.2021.111047), each having at least four years of programming experience, participated in the annotation process.
We partitioned the Java, Python, and Smalltalk comments equally among all evaluators, based on the distribution of each language's dataset, to ensure the inclusion of comments from all projects and of diversified lengths. Each classification was reviewed by three evaluators.
The details are given in the paper [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047)
#### Who are the annotators?
[Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047)
### Personal and Sensitive Information
Author information embedded in the text
## Additional Information
### Dataset Curators
[Pooja Rani, Ivan, Manuel]
### Licensing Information
[license: cc-by-nc-sa-4.0]
### Citation Information
```
@article{RANI2021111047,
title = {How to identify class comment types? A multi-language approach for class comment classification},
journal = {Journal of Systems and Software},
volume = {181},
pages = {111047},
year = {2021},
issn = {0164-1212},
doi = {https://doi.org/10.1016/j.jss.2021.111047},
url = {https://www.sciencedirect.com/science/article/pii/S0164121221001448},
author = {Pooja Rani and Sebastiano Panichella and Manuel Leuenberger and Andrea {Di Sorbo} and Oscar Nierstrasz},
keywords = {Natural language processing technique, Code comment analysis, Software documentation}
}
```
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806176 | 2022-09-07T03:32:35.000Z | null | false | dbfb6932cd47473876f8869f8fae932cc9099edb | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:big_patent"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806176/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- big_patent
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-book-summary
metrics: []
dataset_name: big_patent
dataset_config: y
dataset_split: test
col_mapping:
text: description
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: big_patent
* Config: y
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806177 | 2022-09-06T10:16:50.000Z | null | false | 214a9794ff850e1c35c9d22c58752e1ee0cd10df | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:big_patent"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806177/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- big_patent
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
metrics: []
dataset_name: big_patent
dataset_config: y
dataset_split: test
col_mapping:
text: description
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: big_patent
* Config: y
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806178 | 2022-09-06T16:50:20.000Z | null | false | f4f99ef293bfa13ce34d2cf7ece919d9776ff0ca | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:big_patent"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806178/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- big_patent
eval_info:
task: summarization
model: pszemraj/led-base-book-summary
metrics: []
dataset_name: big_patent
dataset_config: y
dataset_split: test
col_mapping:
text: description
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: big_patent
* Config: y
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
bigscience-biomedical | null | @article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
author={Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
} | BIOSSES computes similarity of biomedical sentences by utilizing WordNet as the
general domain ontology and UMLS as the biomedical domain specific ontology.
The original paper outlines the approaches with respect to using annotator
score as the gold standard. The source view returns all annotator scores
individually, whereas the BigBio view returns the mean of the annotator
score. | false | 2 | false | bigscience-biomedical/biosses | 2022-10-16T19:22:03.000Z | null | false | 45a41748fd315381f85f3b7363ec25cd7d0f2d31 | [] | [
"language:en",
"license:gpl-3.0",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/bigscience-biomedical/biosses/resolve/main/README.md | ---
language: en
license: gpl-3.0
multilinguality: monolingual
pretty_name: BIOSSES
---
# Dataset Card for BIOSSES
## Homepage
https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
## Dataset Description
BIOSSES computes similarity of biomedical sentences by utilizing WordNet as the general domain ontology and UMLS as the biomedical domain-specific ontology. The original paper outlines the approaches with respect to using the annotator score as the gold standard. The source view returns all annotator scores individually, whereas the BigBio view returns the mean of the annotator scores.
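A minimal loading sketch (the two config names below follow the BigBio naming convention and are assumptions; check the dataset script for the exact identifiers):

```python
from datasets import load_dataset

# Source view: one similarity score per annotator.
source = load_dataset("bigscience-biomedical/biosses", name="biosses_source")
# BigBio view: the mean of the annotator scores.
bigbio = load_dataset("bigscience-biomedical/biosses", name="biosses_bigbio_pairs")
```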
## Citation Information
```
@article{souganciouglu2017biosses,
title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
author={Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan},
journal={Bioinformatics},
volume={33},
number={14},
pages={i49--i58},
year={2017},
publisher={Oxford University Press}
}
```
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-samsum-samsum-fbc19a-15816179 | 2022-09-06T02:43:18.000Z | null | false | d1cb85a2f99002f343fad318b7f3d9d1b308921f | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:samsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-samsum-samsum-fbc19a-15816179/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- samsum
eval_info:
task: summarization
model: google/pegasus-xsum
metrics: []
dataset_name: samsum
dataset_config: samsum
dataset_split: validation
col_mapping:
text: dialogue
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-xsum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-0b05dc-15886185 | 2022-09-06T10:42:21.000Z | null | false | c2bb89e72da89cf38680d5bb47fe689b0716bfc5 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-0b05dc-15886185/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: t5-small
metrics: []
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: t5-small
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Carmen](https://huggingface.co/Carmen) for evaluating this model. |
eraldoluis | null | @INPROCEEDINGS{
8923668,
author={Sayama, Hélio Fonseca and Araujo, Anderson Viçoso and Fernandes, Eraldo Rezende},
booktitle={2019 8th Brazilian Conference on Intelligent Systems (BRACIS)},
title={FaQuAD: Reading Comprehension Dataset in the Domain of Brazilian Higher Education},
year={2019},
volume={},
number={},
pages={443-448},
doi={10.1109/BRACIS.2019.00084}
} | Academic secretaries and faculty members of higher education institutions face a common problem:
the abundance of questions sent by academics
whose answers are found in available institutional documents.
The official documents produced by Brazilian public universities are vast and dispersed,
which discourages students from searching further for answers in such sources.
In order to lessen this problem, we present FaQuAD:
a novel machine reading comprehension dataset
in the domain of Brazilian higher education institutions.
FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].
It comprises 900 questions about 249 reading passages (paragraphs),
which were taken from 18 official documents of a computer science college
from a Brazilian federal university
and 21 Wikipedia articles related to Brazilian higher education system.
As far as we know, this is the first Portuguese reading comprehension dataset in this format. | false | 4 | false | eraldoluis/faquad | 2022-09-07T11:46:08.000Z | null | false | 034808adf6a51fbe9ce4a53eeeba84627b67419d | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:pt",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:extended|wikipedia",
"task_categories:question-answering",
"task_ids:extractive-qa"
] | https://huggingface.co/datasets/eraldoluis/faquad/resolve/main/README.md | ---
pretty_name: FaQuAD
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
# paperswithcode_id: faquad
train-eval-index:
- config: plain_text
task: question-answering
task_id: extractive_question_answering
splits:
train_split: train
eval_split: validation
col_mapping:
question: question
context: context
answers:
text: text
answer_start: answer_start
metrics:
- type: squad
name: SQuAD
---
# Dataset Card for FaQuAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/liafacom/faquad
- **Repository:** https://github.com/liafacom/faquad
- **Paper:** https://ieeexplore.ieee.org/document/8923668/
<!-- - **Leaderboard:** -->
- **Point of Contact:** Eraldo R. Fernandes <eraldoluis@gmail.com>
### Dataset Summary
Academic secretaries and faculty members of higher education institutions face a common problem:
the abundance of questions sent by academics
whose answers are found in available institutional documents.
The official documents produced by Brazilian public universities are vast and dispersed,
which discourages students from searching further for answers in such sources.
In order to lessen this problem, we present FaQuAD:
a novel machine reading comprehension dataset
in the domain of Brazilian higher education institutions.
FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].
It comprises 900 questions about 249 reading passages (paragraphs),
which were taken from 18 official documents of a computer science college
from a Brazilian federal university
and 21 Wikipedia articles related to Brazilian higher education system.
As far as we know, this is the first Portuguese reading comprehension dataset in this format.
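A minimal usage sketch, assuming the loader exposes SQuAD-style fields as suggested by the `train-eval-index` column mapping above:

```python
from datasets import load_dataset

faquad = load_dataset("eraldoluis/faquad")
sample = faquad["train"][0]
print(sample["question"])
print(sample["context"][:100])
print(sample["answers"]["text"], sample["answers"]["answer_start"])
```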
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
|
schibsted | null | null | null | false | 1 | false | schibsted/recsys-slates-dataset | 2022-09-06T11:27:53.000Z | null | false | 28d972c94caec3a6308383a261e6c84733baaa80 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/schibsted/recsys-slates-dataset/resolve/main/README.md | ---
license: apache-2.0
---
|
gorkaartola | null | null | null | false | 1 | false | gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Title_and_Headline | 2022-09-06T14:52:46.000Z | null | false | c9bc2dc442b053e2f70f11cbcf6aa3ee01b54286 | [] | [] | https://huggingface.co/datasets/gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Title_and_Headline/resolve/main/README.md | label_ids:
- (0) contradiction
- (2) entailment |
tartuNLP | null | null | null | false | 1 | false | tartuNLP/finno-ugric-train | 2022-09-08T14:27:45.000Z | null | false | b314649ae9af4fd4e235b506acea00bb09ebe923 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/tartuNLP/finno-ugric-train/resolve/main/README.md | ---
license: cc-by-4.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-conll2003-conll2003-0054c2-15936187 | 2022-09-06T17:53:00.000Z | null | false | f1fed66dfcbbc155f73431e9f2c9362fe2ace7d4 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:conll2003"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-conll2003-conll2003-0054c2-15936187/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: kamalkraj/bert-base-cased-ner-conll2003
metrics: []
dataset_name: conll2003
dataset_config: conll2003
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: kamalkraj/bert-base-cased-ner-conll2003
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@akdeniz27](https://huggingface.co/akdeniz27) for evaluating this model. |
priyank-m | null | null | null | false | 1 | false | priyank-m/chinese_text_recognition | 2022-09-21T09:08:19.000Z | null | false | 45970ba9a0fc0f0e7971757228ea1b17d9dd3dfb | [] | [
"language:zh",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"tags:ocr",
"tags:text-recognition",
"tags:chinese",
"task_categories:image-to-text",
"task_ids:image-captioning"
] | https://huggingface.co/datasets/priyank-m/chinese_text_recognition/resolve/main/README.md | ---
annotations_creators: []
language:
- zh
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: chinese_text_recognition
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- ocr
- text-recognition
- chinese
task_categories:
- image-to-text
task_ids:
- image-captioning
---
Source of data: https://github.com/FudanVI/benchmarking-chinese-text-recognition |
CShorten | null | null | null | false | 1 | false | CShorten/1000-CORD19-Papers-Text | 2022-09-06T22:05:10.000Z | null | false | 19654330f83566c724afc264534fa726aa834bb9 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/CShorten/1000-CORD19-Papers-Text/resolve/main/README.md | ---
license: afl-3.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fcbcd1-15976191 | 2022-09-06T23:16:06.000Z | null | false | 499e407cf6a86f408818969400d1de63163e65a1 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fcbcd1-15976191/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['rouge', 'accuracy', 'exact_match']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-5863f2-15966190 | 2022-09-06T23:14:30.000Z | null | false | 5909507bf7ac0113a0a906b0a5583c8b8e0d4085 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-5863f2-15966190/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
metrics: ['rouge', 'accuracy']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. |
neuralspace | null | null | null | false | 1 | false | neuralspace/citizen_nlu | 2022-09-09T05:53:16.000Z | acronym-identification | false | 1139ac8154d30113fab374b3961faec562b0dd8f | [] | [
"annotations_creators:other",
"language_creators:other",
"language:as",
"language:bn",
"language:gu",
"language:hi",
"language:kn",
"language:mr",
"language:pa",
"language:ta",
"language:te",
"expert-generated license:cc-by-nc-sa-4.0",
"multilinguality:multilingual",
"size_categories:n>1K"... | https://huggingface.co/datasets/neuralspace/citizen_nlu/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- other
language:
- as
- bn
- gu
- hi
- kn
- mr
- pa
- ta
- te
expert-generated license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- n>1K
source_datasets:
- original
task_categories:
- question-answering
- text-retrieval
- text2text-generation
- other
- translation
- conversational
task_ids:
- extractive-qa
- closed-domain-qa
- utterance-retrieval
- document-retrieval
- open-book-qa
- closed-book-qa
paperswithcode_id: acronym-identification
pretty_name: Citizen Services NLU Multilingual Dataset.
train-eval-index:
- config: citizen_nlu
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
sentence: text
label: target
metrics:
- type: citizen_nlu
name: citizen_nlu
config:
citizen_nlu
tags:
- chatbots
- citizen services
- help
- emergency services
- health
- reporting crime
configs:
- citizen_nlu
---
# Dataset Card for citizen_nlu
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Description
- **Homepage**: [NeuralSpace Homepage](https://huggingface.co/neuralspace)
- **Repository:** [citizen_nlu Dataset](https://huggingface.co/datasets/neuralspace/citizen_nlu)
- **Point of Contact:** [Juhi Jain](mailto:juhi@neuralspace.ai)
- **Point of Contact:** [Ayushman Dash](mailto:ayushman@neuralspace.ai)
- **Size of downloaded dataset files:** 67.6 MB
### Dataset Summary
NeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks.
This challenge was created with the purpose of sparking AI applications to address some of the pressing problems in India and to find unique ways to address them. Starting with a focus on NLU, this challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web.
NeuralSpace aims at mastering the low-resource domain, and the citizen services use case is naturally a multilingual and essential domain for the general citizen.
Citizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries.
Such services may not be needed regularly by any particular citizen, but when needed they are of utmost importance, and in general the need for such services is prevalent every day.
Despite the importance of citizen services, linguistically rich countries like India are still far behind in delivering such essential services to their citizens with ease. The best services currently available do not exist in various low-resource languages that are native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly.
As our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a citizen-services bot with the ability to converse in vernacular languages would make it accessible to a vast group of people for whom English is not a language of choice but who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants.
### Supported Tasks
A key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition’. This primarily enables any chatbot to perform various tasks with ease. A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants.
#### citizen_nlu
A manually curated multilingual dataset created by data engineers at [NeuralSpace](https://www.neuralspace.ai/) for citizen services in 9 Indian languages: a realistic information-seeking task with data samples written by native-speaking expert data annotators. The dataset files are available in CSV format.
### Languages
The citizen_nlu data is available in nine Indian languages, i.e., Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 67.6 MB
An example of 'test' looks as follows.
```
text,intents
मेरे पिता की कार उनके कार्यालय की पार्किंग से कल से गायब है। वाहन संख्या केए-03-एचए-1985 । मैं एफआईआर कराना चाहता हूं।,ReportingMissingVehicle
```
An example of 'train' looks as follows.
```
text,intents
என் தாத்தா எனக்கு பிறந்தநாள் பரிசு கொடுத்தார் மஞ்சள் நான் டாடனானோவை இழந்தேன். காணவில்லை என புகார் தெரிவிக்க விரும்புகிறேன்,ReportingMissingVehicle
```
### Data Fields
The data fields are the same among all splits.
#### citizen_nlu
- `text`: a `string` feature.
- `intent`: a `string` feature.
- `type`: a classification label, with possible values including `train` or `test`.
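A minimal loading sketch (the config name is taken from the card's `configs` entry; note that the CSV samples above use an `intents` column):

```python
from datasets import load_dataset

ds = load_dataset("neuralspace/citizen_nlu", "citizen_nlu")
print(ds["train"][0])  # e.g. {'text': '...', 'intents': 'ReportingMissingVehicle'}
```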
### Data Splits
#### citizen_nlu
| |train|test|
|----|----:|---:|
|citizen_nlu| 287832| 4752|
### Contributions
Mehar Bhatia (mehar@neuralspace.ai) |
neuralspace | null | null | null | false | 1 | false | neuralspace/autotrain-data-citizen_nlu_bn | 2022-09-07T05:32:14.000Z | null | false | 542460b9f8fefcc6544fdd06991e3a3d9be2eef3 | [] | [
"language:bn",
"task_categories:text-classification"
] | https://huggingface.co/datasets/neuralspace/autotrain-data-citizen_nlu_bn/resolve/main/README.md | ---
language:
- bn
task_categories:
- text-classification
---
# AutoTrain Dataset for project: citizen_nlu_bn
## Dataset Description
This dataset has been automatically processed by AutoTrain for project citizen_nlu_bn.
### Languages
The BCP-47 code for the dataset's language is bn.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\u0997\u09a4 \u09e8 \u09ae\u09be\u09b8 \u0986\u09ae\u09be\u09b0 \u0986\u0997\u09c7 \u0995\u09b0\u09cb \u09a8\u09be \u0986\u09ae\u09bf \u0995\u09a4 \u09a6\u09bf\u09a8 \u09aa\u09b0\u09c7 \u09b0\u0995\u09cd\u09a4 \u09a6\u09bf\u09a4\u09c7 \u09aa\u09be\u09b0\u09bf?",
"target": 3
},
{
"text": "\u09b9\u09a0\u09be\u09ce \u0986\u09ae\u09bf \u09a6\u09cb\u0995\u09be\u09a8\u09c7 \u09af\u09be\u0993\u09af\u09bc\u09be\u09b0 \u099c\u09a8\u09cd\u09af \u098f\u0995\u099f\u09bf \u0996\u09be\u09b2\u09bf \u09b0\u09be\u09b8\u09cd\u09a4\u09be\u09af\u09bc \u09b9\u09be\u0981\u099f\u099b\u09bf\u09b2\u09be\u09ae \u09b8\u09be\u09a6\u09be \u09b0\u0999\u09c7\u09b0 \u0993\u09ac\u09bf 005639 \u0986\u09ae\u09bf \u09b0\u09bf\u09aa\u09cb\u09b0\u09cd\u099f \u0995\u09b0\u09ac \u09af\u0996\u09a8 \u0986\u09ae\u09bf \u09a4\u09be\u09b0 \u0995\u09be\u099b\u09c7 \u0986\u09b8\u09ac \u098f\u09ac\u0982 \u09a7\u09be\u0995\u09cd\u0995\u09be \u09a6\u09bf\u09af\u09bc\u09c7 \u099a\u09b2\u09c7 \u09af\u09be\u09ac",
"target": 44
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=55, names=['ContactRealPerson', 'Eligibility For BloodDonationWithComorbidities', 'EligibilityForBloodDonationAgeLimit', 'EligibilityForBloodDonationCovidGap', 'EligibilityForBloodDonationForPregnantWomen', 'EligibilityForBloodDonationGap', 'EligibilityForBloodDonationSTD', 'EligibilityForBloodReceiversBloodGroup', 'EligitbilityForVaccine', 'InquiryForCovidActiveCasesCount', 'InquiryForCovidDeathCount', 'InquiryForCovidPrevention', 'InquiryForCovidRecentCasesCount', 'InquiryForCovidTotalCasesCount', 'InquiryForDoctorConsultation', 'InquiryForQuarantinePeriod', 'InquiryForTravelRestrictions', 'InquiryForVaccinationRequirements', 'InquiryForVaccineCost', 'InquiryForVaccineCount', 'InquiryOfContact', 'InquiryOfCovidSymptoms', 'InquiryOfEmergencyContact', 'InquiryOfLocation', 'InquiryOfLockdownDetails', 'InquiryOfTiming', 'InquiryofBloodDonationRequirements', 'InquiryofBloodReceivalRequirements', 'InquiryofPostBloodDonationCareSchemes', 'InquiryofPostBloodDonationCertificate', 'InquiryofPostBloodDonationEffects', 'InquiryofPostBloodReceivalCareSchemes', 'InquiryofPostBloodReceivalEffects', 'InquiryofVaccinationAgeLimit', 'IntentForBloodDonationAppointment', 'IntentForBloodReceivalAppointment', 'ReportingAnimalAbuse', 'ReportingAnimalPoaching', 'ReportingChildAbuse', 'ReportingCyberCrime', 'ReportingDomesticViolence', 'ReportingDowry', 'ReportingDrugConsumption', 'ReportingDrugTrafficing', 'ReportingHitAndRun', 'ReportingMissingPerson', 'ReportingMissingPets', 'ReportingMissingVehicle', 'ReportingMurder', 'ReportingPropertyTakeOver', 'ReportingSexualAssault', 'ReportingTheft', 'ReportingTresspassing', 'ReportingVehicleAccident', 'StatusOfFIR'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 27146 |
| valid | 6800 |
|
asaxena1990 | null | null | null | false | 1 | false | asaxena1990/citizen_nlu | 2022-09-07T05:45:47.000Z | null | false | 90d581bb08843607d7d75eabeba4047109f4f434 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/asaxena1990/citizen_nlu/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
cjvt | null | @misc{solar3.0,
title = {Developmental corpus {\v S}olar 3.0},
author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
url = {http://hdl.handle.net/11356/1589},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
} | Šolar is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian). | false | 54 | false | cjvt/solar3 | 2022-10-21T07:35:45.000Z | null | false | a77ffb4773b694d03c805d80ea128b44e5c709f3 | [] | [
"annotations_creators:expert-generated",
"language_creators:other",
"language:sl",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text2text-generation",
"task_categories:other",
"tags... | https://huggingface.co/datasets/cjvt/solar3/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- sl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
- other
task_ids: []
pretty_name: solar3
tags:
- grammatical-error-correction
- other-token-classification-of-text-errors
---
# Dataset Card for solar3
### Dataset Summary
Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools
(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the
document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian).
(*) pronounce "š" as "sh" in "shoe".
By default the dataset is provided at **sentence-level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences.
There is also an option to aggregate the instances at the **document-level** or **paragraph-level**
by explicitly providing the correct config:
```
datasets.load_dataset("cjvt/solar3", "paragraph_level")`
datasets.load_dataset("cjvt/solar3", "document_level")`
```
### Supported Tasks and Leaderboards
Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```json
{
'id_doc': 'solar1',
'doc_title': 'KUS-G-slo-1-GO-E-2009-10001',
'is_manually_validated': True,
'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
'src_ling_annotations': {
# truncated for conciseness
'lemma': ['”', 'ne', 'da', 'sovražiti', ...],
'ana': ['mte:U', 'mte:L', 'mte:Vd', ...],
'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...],
'ne_tag': [..., 'O', 'B-PER', 'O', ...],
'space_after': [False, True, True, False, ...]
},
'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'],
# omitted for conciseness, the format is the same as in 'src_ling_annotations'
'tgt_ling_annotations': {...},
'corrections': [
{'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']},
{'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']}
]
}
```
The instance represents a correction in the document 'solar1' (`id_doc`), which were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by three elements in `corrections`):
- a punctuation change: '”' -> '„';
- a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside);
- addition of a new word: 'sebi'.
### Data Fields
- `id_doc`: a string containing the identifying name of the document in which the sentence appears;
- `doc_title`: a string containing the assigned document title;
- `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher;
- `src_tokens`: words in the source sentence (`[]` if there is no source sentence);
- `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence);
- `tgt_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the target tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token;
- `corrections`: a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved (`idx_src`), target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Please note that there can be multiple assigned categories for one annotated correction, in which case `len(corr_types) > 1`.
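A minimal sketch that walks the annotated corrections of one instance, assuming the default sentence-level config, a `train` split, and that `corrections` is returned as a list of dicts, as in the sample above:

```python
from datasets import load_dataset

solar = load_dataset("cjvt/solar3", split="train")  # sentence-level by default
ex = solar[0]
for corr in ex["corrections"]:
    # Map the token indices of each correction back to the actual tokens.
    src = [ex["src_tokens"][i] for i in corr["idx_src"]]
    tgt = [ex["tgt_tokens"][i] for i in corr["idx_tgt"]]
    print(corr["corr_types"], src, "->", tgt)
```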
## Dataset Creation
The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.
Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.
## Additional Information
### Dataset Curators
Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1589 for the full list)
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{solar3,
title = {Developmental corpus {\v S}olar 3.0},
author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok},
url = {http://hdl.handle.net/11356/1589},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
julius-br | null | null | null | false | null | false | julius-br/GARFAB | 2022-09-21T15:54:55.000Z | null | false | aa18c10ce999c806bf6f30a050b0d9a720ccd0c3 | [] | [
"license:mit"
] | https://huggingface.co/datasets/julius-br/GARFAB/resolve/main/README.md | ---
license: mit
---
**Published**: September 21st, 2022 <br>
**Author**: Julius Breiholz
# GARFAB-Dataset
The (G)erman corpus of annotated (A)pp (R)eviews to detect (F)eature requests (A)nd (B)ug reports (GARFAB) is a dataset for fine-tuning models that classify German app store reviews (ASRs) into "Feature Requests", "Bug Reports", and "Irrelevant". All ASRs were collected from the Google Play Store and classified manually by two independent annotators. A weighted and a full version are published, with the following distributions of ASRs:
| | Feature Request | Bug Reports | Irrelevant | Total |
| --- | --- | --- | --- | --- |
| full | 345 | 387 | 2212 | 2944 |
| weighted | 345 | 345 | 345 | 1035 |
|
jamescalam | null | null | null | false | 1 | false | jamescalam/reddit-demo | 2022-09-07T12:12:43.000Z | null | false | 85f90b5212cc669b29aac223f6e7a97e82da95c9 | [] | [] | https://huggingface.co/datasets/jamescalam/reddit-demo/resolve/main/README.md | # Reddit Demo dataset
|
helliun | null | null | null | false | null | false | helliun/mePics | 2022-09-07T14:33:55.000Z | null | false | b514058e84ca638776d8b92786dc41a343aafdbf | [] | [] | https://huggingface.co/datasets/helliun/mePics/resolve/main/README.md | ;oertjh |
Outside | null | null | null | false | 1 | false | Outside/prova | 2022-09-07T13:38:43.000Z | null | false | bf8ef036aa26d956ce5adf2e4e614f2fa714d595 | [] | [
"license:other"
] | https://huggingface.co/datasets/Outside/prova/resolve/main/README.md | ---
license: other
---
|
abcefgdfdsf | null | null | null | false | null | false | abcefgdfdsf/stablediff | 2022-09-07T15:14:14.000Z | null | false | c59a9221b13784714d149bd63d66e7c7df90ce3a | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/abcefgdfdsf/stablediff/resolve/main/README.md | ---
license: apache-2.0
---
|
nagyigergo | null | null | null | false | null | false | nagyigergo/gyurcsany | 2022-09-07T16:56:02.000Z | null | false | 37ff92ce72b49a5e1bfb603b158475a6506db739 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/nagyigergo/gyurcsany/resolve/main/README.md | ---
license: unknown
---
|
zeroshot | null | null | null | false | 5 | false | zeroshot/twitter-financial-news-topic | 2022-09-07T18:47:26.000Z | null | false | 10ef7f8808e95d6b848e2da300e24e4feeedccd5 | [] | [
"annotations_creators:other",
"language:en",
"language_creators:other",
"license:mit",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:twitter",
"tags:finance",
"tags:markets",
"tags:stocks",
"tags:wallstreet",
"tags:quant",
"tags:hedgefunds",... | https://huggingface.co/datasets/zeroshot/twitter-financial-news-topic/resolve/main/README.md | ---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: twitter financial news
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- twitter
- finance
- markets
- stocks
- wallstreet
- quant
- hedgefunds
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
### Dataset Description
The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets by topic.
The dataset holds 21,107 documents annotated with 20 labels:
```python
topics = {
"LABEL_0": "Analyst Update",
"LABEL_1": "Fed | Central Banks",
"LABEL_2": "Company | Product News",
"LABEL_3": "Treasuries | Corporate Debt",
"LABEL_4": "Dividend",
"LABEL_5": "Earnings",
"LABEL_6": "Energy | Oil",
"LABEL_7": "Financials",
"LABEL_8": "Currencies",
"LABEL_9": "General News | Opinion",
"LABEL_10": "Gold | Metals | Materials",
"LABEL_11": "IPO",
"LABEL_12": "Legal | Regulation",
"LABEL_13": "M&A | Investments",
"LABEL_14": "Macro",
"LABEL_15": "Markets",
"LABEL_16": "Politics",
"LABEL_17": "Personnel Change",
"LABEL_18": "Stock Commentary",
"LABEL_19": "Stock Movement",
}
```
The data was collected using the Twitter API. The current dataset supports the multi-class classification task.
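A minimal loading sketch (the `text` and `label` column names are assumptions; `topics` is the mapping defined above):

```python
from datasets import load_dataset

ds = load_dataset("zeroshot/twitter-financial-news-topic", split="train")
ex = ds[0]
# Resolve the integer label to its human-readable topic name.
print(ex["text"], "->", topics[f"LABEL_{ex['label']}"])
```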
### Task: Topic Classification
### Data Splits
There are 2 splits: train and validation. Below are the statistics:
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 16,990 |
| Validation | 4,118 |
### Licensing Information
The Twitter Financial Dataset (topic) version 1.0.0 is released under the MIT License. |
Blueo | null | null | null | false | null | false | Blueo/images | 2022-09-07T22:14:38.000Z | null | false | 9d48f81e8065d6e3eaec1ad961067941818ed327 | [] | [] | https://huggingface.co/datasets/Blueo/images/resolve/main/README.md | |
nateraw | null | null | null | false | 1 | false | nateraw/us-accidents | 2022-09-07T22:24:52.000Z | null | false | 5873a8aa4a5b3b4010501de70241f853acbbadc0 | [] | [
"arxiv:1906.05409",
"arxiv:1909.09638",
"license:cc-by-nc-sa-4.0",
"kaggle_id:sobhanmoosavi/us-accidents"
] | https://huggingface.co/datasets/nateraw/us-accidents/resolve/main/README.md | ---
license:
- cc-by-nc-sa-4.0
kaggle_id: sobhanmoosavi/us-accidents
---
# Dataset Card for US Accidents (2016 - 2021)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/sobhanmoosavi/us-accidents
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Description
This is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset.
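A minimal loading sketch, assuming the Kaggle CSV was imported as a single default config with a `train` split:

```python
from datasets import load_dataset

accidents = load_dataset("nateraw/us-accidents", split="train")
print(accidents.column_names)  # inspect the available record fields
print(accidents[0])
```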
### Acknowledgements
Please cite the following papers if you use this dataset:
- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. “[A Countrywide Traffic Accident Dataset](https://arxiv.org/abs/1906.05409).”, 2019.
- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. ["Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights."](https://arxiv.org/abs/1909.09638) In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019.
### Content
This dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset.
### Inspiration
US-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents.
### Usage Policy and Legal Disclaimer
This dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. You may need to cite the above papers if you use this dataset.
### Inquiries or need help?
For any inquiries, contact me at moosavi.3@osu.edu
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@sobhanmoosavi](https://kaggle.com/sobhanmoosavi)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
nupurkmr9 | null | null | null | false | null | false | nupurkmr9/tortoise | 2022-09-08T02:57:37.000Z | null | false | 651baf9f1fbef3d6fb3de9b01651f3a5454f8c09 | [] | [
"license:mit"
] | https://huggingface.co/datasets/nupurkmr9/tortoise/resolve/main/README.md | ---
license: mit
---
|
SetFit | null | null | null | false | 9 | false | SetFit/onestop_english | 2022-09-08T06:16:39.000Z | null | false | 95ec1d31cef548b24b6071771ed2a2d317fd7717 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/SetFit/onestop_english/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
# OneStopEnglish
OneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness through two applications: automatic readability assessment and automatic text simplification.
This dataset is a version of [onestop_english](https://huggingface.co/datasets/onestop_english), which was randomly split into (64*3=) 192 train examples and 375 test examples (stratified).
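A minimal loading sketch, assuming SetFit's usual `text`/`label` columns:

```python
from datasets import load_dataset

ds = load_dataset("SetFit/onestop_english")
print(ds)              # expect 192 train and 375 test examples
print(ds["train"][0])  # e.g. {'text': '...', 'label': ...}
```
|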
sberbank-ai | null | null | null | false | 1 | false | sberbank-ai/school_notebooks_EN | 2022-10-25T11:10:25.000Z | null | false | b3f5895b2de319ccb2e3ae9e0d8fd6b193da46e7 | [] | [
"language:en",
"license:mit",
"source_datasets:original",
"task_categories:image-segmentation",
"task_categories:object-detection",
"tags:optical-character-recognition",
"tags:text-detection",
"tags:ocr"
] | https://huggingface.co/datasets/sberbank-ai/school_notebooks_EN/resolve/main/README.md | ---
language:
- en
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# School Notebooks Dataset
The images of school notebooks with handwritten notes in English.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with a categories info (categotiy names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images, each dictionary must contain fields:
- `file_name` - name of the image file.
- `id` for image id.
- `annotation["annotations"]` - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find text translation for the line.
- `segmentation` - the coordinates of the polygon, a list of numbers which are x and y coordinate pairs.
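A minimal parsing sketch in Python, assuming standard COCO `id` and `name` keys for the image and category entries:

```python
import json

# Group polygons by image using the COCO-style fields described above.
with open("annotation.json") as f:
    ann = json.load(f)

images = {img["id"]: img["file_name"] for img in ann["images"]}
categories = {c["id"]: c["name"] for c in ann["categories"]}

for a in ann["annotations"][:5]:
    print(images[a["image_id"]], categories[a["category_id"]])
```
|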
Anastasia1812 | null | null | null | false | 1 | false | Anastasia1812/bunny | 2022-09-08T09:56:50.000Z | null | false | 360875ac83db1a044fa95d969013eda19d8c2667 | [] | [] | https://huggingface.co/datasets/Anastasia1812/bunny/resolve/main/README.md | Bunny dataset |
sberbank-ai | null | null | null | false | 10 | false | sberbank-ai/school_notebooks_RU | 2022-10-25T11:11:05.000Z | null | false | de3d933876c7141671ee244acb800131fb5bf787 | [] | [
"language:ru",
"license:mit",
"source_datasets:original",
"task_categories:image-segmentation",
"task_categories:object-detection",
"tags:optical-character-recognition",
"tags:text-detection",
"tags:ocr"
] | https://huggingface.co/datasets/sberbank-ai/school_notebooks_RU/resolve/main/README.md | ---
language:
- ru
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# School Notebooks Dataset
The images of school notebooks with handwritten notes in Russian.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with a categories info (categotiy names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images, each dictionary must contain fields:
- `file_name` - name of the image file.
- `id` for image id.
- `annotation["annotations"]` - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - dict with some additional annotation information. In the `translation` subdict you can find text translation for the line.
- `segmentation` - the coordinates of the polygon, a list of numbers which are x and y coordinate pairs. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-staging-eval-emotion-default-39ecfd-16096203 | 2022-09-08T10:10:12.000Z | null | false | c7186656e42f3b8660bf4a0e7768d54bb8d9429d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-emotion-default-39ecfd-16096203/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: lewtun/sagemaker-distilbert-emotion-1
metrics: []
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewtun/sagemaker-distilbert-emotion-1
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
jmacs | null | null | null | false | 1 | false | jmacs/jmacsface | 2022-09-08T11:43:37.000Z | null | false | b168b613f0d023619bf0d00d9b7b34e9bc407afe | [] | [
"license:cc"
] | https://huggingface.co/datasets/jmacs/jmacsface/resolve/main/README.md | ---
license: cc
---
|
merve | null | null | null | false | 22 | false | merve/supersoaker-failures | 2022-09-08T16:06:06.000Z | null | false | 2482635b77c1cbd351e72955dca35bed0c135a41 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/merve/supersoaker-failures/resolve/main/README.md | ---
license: apache-2.0
---
|
Aitrepreneur | null | null | null | false | 1 | false | Aitrepreneur/testing | 2022-09-08T16:52:29.000Z | null | false | c446a2bc325ba054ed9adb05a6113e5f41e04d68 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Aitrepreneur/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
SocialGrep | null | null | All the mentions of climate change on Reddit before Sep 1 2022. | false | 1 | false | SocialGrep/the-reddit-climate-change-dataset | 2022-09-08T18:24:20.000Z | null | false | 6d5678654a99a8fd5150bf7523ced793e92a0be6 | [] | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original"
] | https://huggingface.co/datasets/SocialGrep/the-reddit-climate-change-dataset/resolve/main/README.md | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-climate-change-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-climate-change-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset)
- **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset)
### Dataset Summary
All the mentions of climate change on Reddit before Sep 1 2022.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, they exist in two different files, even though many fields are shared. A loading sketch is shown after the field list below.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
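As a convenience, here is a minimal loading sketch using the Hugging Face `datasets` library. The `comments` configuration name is an assumption based on the post/comment split described above - check the dataset repository for the exact configuration and split names:
```python
from datasets import load_dataset

# Assumption: posts and comments ship as separate configurations,
# mirroring the two files described in Data Instances.
comments = load_dataset(
    "SocialGrep/the-reddit-climate-change-dataset", "comments", split="train"
)

# Each record carries the shared fields plus comment-only ones like `body`.
for row in comments.select(range(3)):
    print(row["subreddit.name"], row["score"], row["sentiment"])
```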
## Additional Information
### Licensing Information
CC-BY v4.0
|
nateraw | null | null | null | false | 1 | false | nateraw/airbnb-stock-price-new-new | 2022-09-08T18:48:08.000Z | null | false | 17f24d0e1728d03561905934d6ba0368431d4e42 | [] | [
"license:cc0-1.0",
"kaggle_id:evangower/airbnb-stock-price"
] | https://huggingface.co/datasets/nateraw/airbnb-stock-price-new-new/resolve/main/README.md | ---
license:
- cc0-1.0
kaggle_id: evangower/airbnb-stock-price
---
# Dataset Card for Airbnb Stock Price
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains the historical stock price of Airbnb (ticker symbol ABNB), an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@evangower](https://kaggle.com/evangower)
### Licensing Information
The license for this dataset is cc0-1.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |