id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
flax-community/swahili-safi | 2021-07-18T12:48:55.000Z | [
"region:us"
] | flax-community | Cleaned dataset for Swahili Language Modeling | @InProceedings{huggingface:flax-community,
title = {Cleaned dataset for Swahili Language Modeling},
authors={Fitsum, Alok, Patrick},
year={2021},
link = {https://huggingface.co/datasets/flax-community/swahili-safi}
} | null | 3 | 6 | # Swahili-Safi Dataset
A relatively clean dataset for Swahili language modeling, built by combining and cleaning several existing datasets.
Sources include:
```
mc4-sw
oscar-sw
swahili_news
IWSLT
XNLI
flores 101
swahili-lm
gamayun-swahili-minikit
broadcastnews-sw
subset of wikipedia-en translated (using m2m100) to sw
```
In total this dataset is ~3.5 GB in size with over 21 million lines of text.
## Usage
This dataset can be downloaded and used as follows:
```python
from datasets import load_dataset
ds = load_dataset("flax-community/swahili-safi")
```
|
huggingartists/cardi-b | 2022-10-25T09:26:12.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk},
year={2021}
} | null | 0 | 6 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/cardi-b"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 0.485384 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:block; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/5a60c41c5543b9286bc6d645603c8df8.568x568x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/cardi-b">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Cardi B</div>
<a href="https://genius.com/artists/cardi-b">
<div style="text-align: center; font-size: 14px;">@cardi-b</div>
</a>
</div>
### Dataset Summary
A lyrics dataset parsed from Genius, designed to generate lyrics with HuggingArtists.
The model is available [here](https://huggingface.co/huggingartists/cardi-b).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/cardi-b")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|223| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
datasets = load_dataset("huggingartists/cardi-b")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
train, validation, test = np.split(
    datasets['train']['text'],
    [
        int(len(datasets['train']['text']) * train_percentage),
        int(len(datasets['train']['text']) * (train_percentage + validation_percentage)),
    ],
)
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author={Aleksey Korshuk},
    year={2021}
}
```
## About
*Built by Aleksey Korshuk*
[GitHub](https://github.com/AlekseyKorshuk)
[Twitter](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[Telegram](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository: [AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
|
mrm8488/fake-news | 2021-10-15T16:06:35.000Z | [
"region:us"
] | mrm8488 | null | null | null | 0 | 6 | Entry not found |
omar-sharif/BAD-Bengali-Aggressive-Text-Dataset | 2022-02-24T15:42:02.000Z | [
"region:us"
] | omar-sharif | null | null | null | 1 | 6 | ## Novel Aggressive Text Dataset in Bengali
## Tackling Cyber-Aggression: Identification and Fine-Grained Categorization of Aggressive Texts on Social Media using Weighted Ensemble of Transformers
**Author:** Omar Sharif and Mohammed Moshiul Hoque
**Related Papers:**
[Paper1 in Neurocomputing Journal](https://www.sciencedirect.com/science/article/abs/pii/S0925231221018567)
[Paper2 in CONSTRAINT@AAAI-2021](https://link.springer.com/chapter/10.1007%2F978-3-030-73696-5_2)
[Paper3 in LTEDI@EACL-2021](https://link.springer.com/chapter/10.1007%2F978-3-030-73696-5_2)
## Abstract
The pervasiveness of aggressive content in social media has become a serious concern for government organizations and tech companies because of its pernicious societal effects. In recent years, social media has been repeatedly used as a tool to incite communal aggression, spread distorted propaganda, damage social harmony and demean the identity of individuals or a community in the public spaces. Therefore, restraining the proliferation of aggressive content and detecting it has become an urgent duty. Studies of the identification of aggressive content have mostly been done for English and other high-resource languages. Automatic systems developed for those languages cannot accurately identify detrimental content written in regional languages like Bengali. To compensate for this insufficiency, this work presents a novel Bengali aggressive text dataset (called ‘BAD’) with two-level annotation. In level-A, 14158 texts are labeled as either aggressive or non-aggressive, while in level-B, 6807 aggressive texts are categorized into religious, political, verbal and gendered aggression classes, having 2217, 2085, 2043 and 462 texts respectively. This paper proposes a weighted ensemble technique including m-BERT, distil-BERT, Bangla-BERT and XLM-R as the base classifiers to identify and classify the aggressive texts in Bengali. The proposed model can readdress the softmax probabilities of the participating classifiers depending on their primary outcomes. This weighting technique has enabled the model to outdo the simple average ensemble and all other machine learning (ML) and deep learning (DL) baselines. It has acquired the highest weighted f1-score of 93.43% in the identification task and 93.11% in the categorization task.
## Contribution
Major contributions of this work can be illustrated in the following:
- Dataset: presents a new Bengali aggressive text dataset which contains 6807 aggressive and 7351 non-aggressive texts. Furthermore, by employing a hierarchical annotation schema, aggressive texts are annotated into religious, political, verbal and gendered aggression classes.
- Insights: provides useful insights and detailed statistics of the data that ensure the quality of the dataset.
- Model: develops a weighted ensemble model using m-BERT, distil-BERT, Bangla-BERT and XLM-R to identify and categorize aggressive Bengali texts. The proposed model weights the participating classifiers' softmax probabilities based on their previous performance on the dataset. This weighting technique outperforms the simple average ensemble approach and enhances classifier performance on the developed dataset.
- Benchmarking: investigates and compares the performance of the proposed model with other ML and DL baselines and existing techniques, thus setting up a benchmark for future comparison.
- Error analysis: deeply analyzes the results and errors of the proposed model, presenting qualitative and quantitative analyses that shed light on the reasons behind some of the errors and providing a few directions that might help mitigate the system's deficiencies.
To the best of our knowledge, this research is one of the pioneering works aiming to identify and classify aggressive texts in Bengali. We expect that the resources developed in this work will pave the way for researchers working on aggressive text classification in Bengali.
## Acknowledgements
We sincerely acknowledge the anonymous reviewers for their insightful suggestions, which helped improve this work. This work was supported by the ICT Innovation Fund, ICT Division and the Directorate of Research & Extension, CUET. Thanks to [Prof. Dr. Mohammed Moshiul Hoque](https://www.researchgate.net/profile/Moshiul_Hoque) for his valuable guidance.
## Cite this work
If you find this repository helpful in your work, please cite the following:
```
@article{SHARIF2021,
title = {Tackling Cyber-Aggression: Identification and Fine-Grained Categorization of Aggressive Texts on Social Media using Weighted Ensemble of Transformers},
journal = {Neurocomputing},
year = {2021},
issn = {0925-2312},
doi = {https://doi.org/10.1016/j.neucom.2021.12.022},
url = {https://www.sciencedirect.com/science/article/pii/S0925231221018567},
author = {Omar Sharif and Mohammed Moshiul Hoque},
keywords = {Natural language processing, Aggressive text classification, Low resource language, Bengali aggressive text corpus, Deep learning, Transformers, Ensemble},
abstract = {The pervasiveness of aggressive content in social media has become a serious concern for government organizations and tech companies because of its pernicious societal effects. In recent years, social media has been repeatedly used as a tool to incite communal aggression, spread distorted propaganda, damage social harmony and demean the identity of individuals or a community in the public spaces. Therefore, restraining the proliferation of aggressive content and detecting them has become an urgent duty. Studies of the identification of aggressive content have mostly been done for English and other resource-high languages. Automatic systems developed for those languages can not accurately identify detrimental contents written in regional languages like Bengali. To compensate this insufficiency, this work presents a novel Bengali aggressive text dataset (called ‘BAD’) with two-level annotation. In level-A, 14158 texts are labeled as either aggressive or non-aggressive. While in level-B, 6807 aggressive texts are categorized into religious, political, verbal and gendered aggression classes each having 2217, 2085, 2043 and 462 texts respectively. This paper proposes a weighted ensemble technique including m-BERT, distil-BERT, Bangla-BERT and XLM-R as the base classifiers to identify and classify the aggressive texts in Bengali. The proposed model can readdress the softmax probabilities of the participating classifiers depending on their primary outcomes. This weighting technique has enabled the model to outdoes the simple average ensemble and all other machine learning (ML), deep learning (DL) baselines. It has acquired the highest weighted f1-score of 93.43% in the identification task and 93.11% in the categorization task.}
}
@InProceedings{sharif2021constraint,
author="Sharif, Omar
and Hoque, Mohammed Moshiul",
editor="Chakraborty, Tanmoy and et al.",
title="Identification and Classification of Textual Aggression in Social Media: Resource Creation and Evaluation",
booktitle="Combating Online Hostile Posts in Regional Languages during Emergency Situation",
year="2021",
publisher="Springer Nature Switzerland AG",
pages="1--12",
doi = {https://doi.org/10.1007/978-3-030-73696-5_2},
}
@inproceedings{sharif-etal-2021-nlp,
title = "{NLP}-{CUET}@{D}ravidian{L}ang{T}ech-{EACL}2021: Offensive Language Detection from Multilingual Code-Mixed Text using Transformers",
author = "Sharif, Omar and
Hossain, Eftekhar and
Hoque, Mohammed Moshiul",
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
address = "Kyiv",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.dravidianlangtech-1.35",
pages = "255--261",
abstract = "The increasing accessibility of the internet facilitated social media usage and encouraged individuals to express their opinions liberally. Nevertheless, it also creates a place for content polluters to disseminate offensive posts or contents. Most of such offensive posts are written in a cross-lingual manner and can easily evade the online surveillance systems. This paper presents an automated system that can identify offensive text from multilingual code-mixed data. In the task, datasets provided in three languages including Tamil, Malayalam and Kannada code-mixed with English where participants are asked to implement separate models for each language. To accomplish the tasks, we employed two machine learning techniques (LR, SVM), three deep learning (LSTM, LSTM+Attention) techniques and three transformers (m-BERT, Indic-BERT, XLM-R) based methods. Results show that XLM-R outperforms other techniques in Tamil and Malayalam languages while m-BERT achieves the highest score in the Kannada language. The proposed models gained weighted f{\_}1 score of 0.76 (for Tamil), 0.93 (for Malayalam ), and 0.71 (for Kannada) with a rank of 3rd, 5th and 4th respectively.",
}
```
## Note
If you find any anomaly or have any query/suggestion, feel free to ping us.
|
philschmid/test_german_squad | 2021-10-25T13:55:14.000Z | [
"region:us"
] | philschmid | null | null | null | 2 | 6 | Entry not found |
s3h/gec-arabic | 2021-12-31T11:00:34.000Z | [
"region:us"
] | s3h | null | null | null | 0 | 6 | Entry not found |
s3h/poc-gec | 2021-12-20T16:36:38.000Z | [
"region:us"
] | s3h | null | null | null | 0 | 6 | Entry not found |
valurank/PoliticalBias_AllSides_Txt | 2022-10-21T13:37:02.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | null | 1 | 6 | ---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for news-12factor
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
- [Annotations](#annotations)
## Dataset Description
~20k articles labeled left, right, or center by the editors of allsides.com.
## Languages
The text in the dataset is in English
## Dataset Structure
3 folders, with many text files in each. Each text file represents the body text of one article.
## Source Data
Article body text was extracted from the source URLs using https://github.com/mozilla/readability
## Annotations
Articles were manually annotated by news editors who were attempting to select representative articles from the left, right and center of each article topic. In other words, the dataset should generally be balanced: the left/right/center subsets cover the same set of topics and have roughly the same number of articles each.
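Given that structure (one folder per label, one article per text file), loading the data into labeled examples can be sketched as below. Note this is an illustrative sketch: the exact folder names (`left`, `center`, `right`) and the `.txt` extension are assumptions to verify against the actual download.

```python
import os

# Assumed label folder names -- verify against the actual download.
LABELS = ["left", "center", "right"]

def load_articles(root):
    """Read one folder per label; each text file is one article body."""
    examples = []
    for label in LABELS:
        folder = os.path.join(root, label)
        for name in sorted(os.listdir(folder)):
            path = os.path.join(folder, name)
            if name.endswith(".txt") and os.path.isfile(path):
                with open(path, encoding="utf-8") as f:
                    examples.append({"text": f.read(), "label": label})
    return examples
```

The resulting list of `{"text": ..., "label": ...}` dicts can be passed directly to `datasets.Dataset.from_list` if a Hugging Face `Dataset` object is needed.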
|
valurank/hate-multi | 2022-10-25T09:57:06.000Z | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:derived",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | null | 0 | 6 | ---
language:
- en
license: other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- derived
task_categories:
- text-classification
---
# Dataset Card for hate-multi
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
## Dataset Description
### Dataset Summary
This dataset contains a collection of text labeled as hate speech (class 1) or not (class 0).
## Dataset Creation
The dataset was created by aggregating multiple publicly available datasets.
### Source Data
The following datasets were used:
* https://huggingface.co/datasets/hate_speech18 - Filtered to remove examples labeled as 'idk/skip' or 'relation'
* https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lower-casing and removing mentions and URLs. Dropped instances labeled as 'offensive language'
* https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech - Tweet text cleaned by lower-casing and removing mentions and URLs. Dropped instances with hatespeech == 1
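The tweet cleaning described above (lower-casing, removing mentions and URLs) can be sketched as follows; the exact regular expressions the curators used are not published, so these patterns are an assumption:

```python
import re

# Assumed patterns -- the curators' exact cleaning rules are not published.
MENTION_RE = re.compile(r"@\w+")
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def clean_tweet(text):
    """Lower-case, strip @mentions and URLs, and collapse extra whitespace."""
    text = text.lower()
    text = MENTION_RE.sub("", text)
    text = URL_RE.sub("", text)
    return re.sub(r"\s+", " ", text).strip()
```

For example, `clean_tweet("@User check THIS out https://t.co/abc")` yields `"check this out"`.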
|
Khedesh/ArmanNER | 2022-03-11T10:42:30.000Z | [
"region:us"
] | Khedesh | null | null | null | 0 | 6 | # PersianNER
Named-Entity Recognition in Persian Language
## ArmanPersoNERCorpus
This is the first manually-annotated Persian named-entity (NE) dataset (ISLRN 399-379-640-828-6). We are releasing it only for academic research use.
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. Each file contains one token, along with its manually annotated named-entity tag, per line. Each sentence is separated with a newline. The NER tags are in IOB format.
According to the instructions provided to the annotators, NEs are categorized into six classes: person, organization (such as banks, ministries, embassies, teams, nationalities, networks and publishers), location (such as cities, villages, rivers, seas, gulfs, deserts and mountains), facility (such as schools, universities, research centers, airports, railways, bridges, roads, harbors, stations, hospitals, parks, zoos and cinemas), product (such as books, newspapers, TV shows, movies, airplanes, ships, cars, theories, laws, agreements and religions), and event (such as wars, earthquakes, national holidays, festivals and conferences); the remaining tokens are labeled as other.
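Given the file format described above (one token and its tag per line, sentences separated by a blank line), a minimal parser sketch might look like this; it assumes whitespace separates the token from its tag on each line:

```python
def read_iob(lines):
    """Parse token-per-line IOB data into (tokens, tags) sentence pairs.

    Assumes each non-blank line is "<token> <tag>" separated by whitespace,
    and a blank line marks a sentence boundary.
    """
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        if not line:  # blank line: end of sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        token, tag = line.rsplit(None, 1)
        tokens.append(token)
        tags.append(tag)
    if tokens:  # flush the last sentence if the file has no trailing blank line
        sentences.append((tokens, tags))
    return sentences
```

The same function works on a file handle, e.g. `read_iob(open("fold1.txt", encoding="utf-8"))`, since it only iterates over lines.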
|
pietrolesci/conj_nli | 2022-04-25T13:27:25.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 6 | ## Overview
The original dataset can be found [here](https://github.com/swarnaHub/ConjNLI). It has been
proposed in [ConjNLI: Natural Language Inference Over Conjunctive Sentences](https://aclanthology.org/2020.emnlp-main.661/).
This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise differs from the hypothesis by conjuncts removed, added, or replaced.
## Dataset curation
The label mapping is the usual `{"entailment": 0, "neutral": 1, "contradiction": 2}`
used in NLI datasets. Note that labels for the `test` split are not available.
Also, the `train` split was originally named `adversarial_train_15k`.
There are 2 instances (joining on "premise", "hypothesis", and "label") present in both `train` and `dev`.
Finally, a few instances in the `train` set have no label; they are removed.
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict
# download data from repo https://github.com/swarnaHub/ConjNLI
paths = {
"train": "<path_to_folder>/ConjNLI-master/data/NLI/adversarial_train_15k.tsv",
"dev": "<path_to_folder>/ConjNLI-master/data/NLI/conj_dev.tsv",
"test": "<path_to_folder>/ConjNLI-master/data/NLI/conj_test.tsv",
}
dataset_splits = {}
for split, path in paths.items():
# load data
df = pd.read_csv(paths[split], sep="\t")
# encode labels using the default mapping used by other nli datasets
# i.e, entailment: 0, neutral: 1, contradiction: 2
df.columns = df.columns.str.lower()
if "test" in path:
df["label"] = -1
else:
# remove empty labels
df = df.loc[~df["label"].isna()]
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
dataset = Dataset.from_pandas(df, features=features)
dataset_splits[split] = dataset
conj_nli = DatasetDict(dataset_splits)
conj_nli.push_to_hub("pietrolesci/conj_nli", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(conj_nli.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
conj_nli[i].to_pandas(),
conj_nli[j].to_pandas(),
on=["premise", "hypothesis", "label"], how="inner"
).shape[0],
)
#> train - dev: 2
#> train - test: 0
#> dev - test: 0
``` |
hackathon-pln-es/es_tweets_laboral | 2022-10-25T10:03:39.000Z | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:unknown",
"region:us"
] | hackathon-pln-es | null | null | null | 1 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- es
license:
- unknown
multilinguality:
- monolingual
pretty_name: "Tweets en espa\xF1ol denuncia laboral"
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
---
# Dataset Card for [es_tweets_laboral]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
Dataset created by @hucruz, @DanielaGarciaQuezada, @hylandude, @BloodBoy21
Labeled by @DanielaGarciaQuezada
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Spanish (es)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
hackathon-pln-es/unam_tesis | 2022-10-25T10:03:47.000Z | [
"task_categories:text-classification",
"task_ids:language-modeling",
"annotations_creators:MajorIsaiah",
"annotations_creators:Ximyer",
"annotations_creators:clavel",
"annotations_creators:inoid",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n=200",
"source_dat... | hackathon-pln-es | null | null | null | 4 | 6 | ---
annotations_creators:
- MajorIsaiah
- Ximyer
- clavel
- inoid
language_creators: [crowdsourced]
language: [es]
license: [apache-2.0]
multilinguality: [monolingual]
pretty_name: ''
size_categories:
- n=200
source_datasets: [original]
task_categories: [text-classification]
task_ids: [language-modeling]
---
# Dataset Card for "unam_tesis"
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
- [yiselclavel@gmail.com](mailto:yiselclavel@gmail.com)
- [isaac7isaias@gmail.com](mailto:isaac7isaias@gmail.com)
### Dataset Summary
The unam_tesis dataset contains 1,000 theses from 5 degree programs at the Universidad Nacional Autónoma de México (UNAM), 200 per program. The plan is to keep growing this dataset with the remaining programs and more theses.
### Supported Tasks and Leaderboards
text-classification
### Languages
Spanish (es)
## Dataset Structure
### Data Instances
Dataset instances have the following form (the example below is kept in the original Spanish, since it is the dataset's content):
El objetivo de esta tesis es elaborar un estudio de las condiciones asociadas al aprendizaje desde casa a nivel preescolar y primaria en el municipio de Nicolás Romero a partir de la cancelación de clases presenciales ante la contingencia sanitaria del Covid-19 y el entorno familiar del estudiante. En México, la Encuesta para la Medición del Impacto COVID-19 en la Educación (ECOVID-ED) 2020, es un proyecto que propone el INEGI y realiza de manera especial para conocer las necesidades de la población estudiantil de 3 a 29 años de edad, saber qué está sucediendo con su entorno inmediato, las condiciones en las que desarrollan sus actividades académicas y el apoyo que realizan padres, tutores o cuidadores principales de las personas en edad formativa. La ECOVID-ED 2020 se llevó a cabo de manera especial con el objetivo de conocer el impacto de la cancelación provisional de clases presenciales en las instituciones educativas del país para evitar los contagios por la pandemia COVID-19 en la experiencia educativa de niños, niñas, adolescentes y jóvenes de 3 a 29 años, tanto en el ciclo escolar 2019-2020, como en ciclo 2020-2021. En este ámbito de investigación, el Instituto de Investigaciones sobre la Universidad y la Educación (IISUE) de la Universidad Nacional Autónoma de México publicó en 2020 la obra “Educación y Pandemia: Una visión académica” que se integran 34 trabajos que abordan la muy amplia temática de la educación y la universidad con reflexiones y ejercicios analíticos estrechamente relacionadas en el marco coyuntural de la pandemia COVID-19.
La tesis se presenta en tres capítulos: En el capítulo uno se realizará una descripción del aprendizaje de los estudiantes a nivel preescolar y primaria del municipio de Nicolás Romero, Estado de México, que por motivo de la contingencia sanitaria contra el Covid-19 tuvieron que concluir su ciclo académico 2019-2020 y el actual ciclo 2020-2021 en su casa debido a la cancelación provisional de clases presenciales y bajo la tutoría de padres, familiar o ser cercano; así como las horas destinadas al estudio y las herramientas tecnológicas como teléfonos inteligentes, computadoras portátiles, computadoras de escritorio, televisión digital y tableta. En el capítulo dos, se presentarán las herramientas necesarias para la captación de la información mediante técnicas de investigación social, a través de las cuales se mencionará, la descripción, contexto y propuestas del mismo, considerando los diferentes tipos de cuestionarios, sus componentes y diseño, teniendo así de manera específica la diversidad de ellos, que llevarán como finalidad realizar el cuestionario en línea para la presente investigación. Posteriormente, se podrá destacar las fases del diseño de la investigación, que se realizarán mediante una prueba piloto tomando como muestra a distintos expertos en el tema. De esta manera se obtendrá la información relevante para estudiarla a profundidad. En el capítulo tres, se realizará el análisis apoyado de las herramientas estadísticas, las cuales ofrecen explorar la muestra de una manera relevante, se aplicará el método inferencial para expresar la información y predecir las condiciones asociadas al autoaprendizaje, la habilidad pedagógica de padres o tutores, la convivencia familiar, la carga académica y actividades escolares y condicionamiento tecnológico, con la finalidad de inferir en la población. Asimismo, se realizarán pruebas de hipótesis, tablas de contingencia y matriz de correlación.
Por consiguiente, los resultados obtenidos de las estadísticas se interpretarán para describir las condiciones asociadas y como impactan en la enseñanza de preescolar y primaria desde casa.|María de los Ángeles|Blancas Regalado|Análisis de las condiciones del aprendizaje desde casa en los alumnos de preescolar y primaria del municipio de Nicolás Romero |2022|Actuaría
| Degree program | Number of instances |
|--------------|----------------------|
| Actuaría | 200 |
| Derecho| 200 |
| Economía| 200 |
| Psicología| 200 |
| Química Farmacéutico Biológica| 200 |
### Data Fields
The dataset is composed of the following fields: "texto|titulo|carrera". <br/>
texto: the text of the thesis introduction. <br/>
titulo: the title of the thesis. <br/>
carrera: the name of the degree program the thesis belongs to. <br/>
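A rough sketch of parsing such a record (the `parse_record` helper and the sample record are illustrative, not part of the dataset tooling):

```python
# Illustrative only: split a raw "texto|titulo|carrera" record into named fields.
# Splitting from the right keeps things simple even if the introduction text
# contains a "|", since the last two pipes delimit "titulo" and "carrera".
FIELDS = ("texto", "titulo", "carrera")

def parse_record(raw: str) -> dict:
    texto, titulo, carrera = raw.rsplit("|", 2)
    return dict(zip(FIELDS, (texto, titulo, carrera)))

record = parse_record("El objetivo de esta tesis es ...|Un titulo de ejemplo|Actuaría")
print(record["carrera"])  # -> Actuaría
```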
### Data Splits
The dataset has 2 splits: training (train) and test (test).
| Split | Number of instances |
|--------------|-------------------|
| Training | 800 |
| Test | 200 |
## Dataset Creation
### Curation Rationale
The creation of this dataset was motivated by participation in the 2022 Spanish NLP Hackathon (Hackathon 2022 de PLN en Español) organized by Somos NLP, whose goal is to democratize NLP in Spanish and promote its application to good causes, and by the fact that no thesis dataset in Spanish existed.
### Source Data
#### Initial Data Collection and Normalization
The original dataset (dataset_tesis) was created through a scraping process that extracted theses from the Universidad Nacional Autónoma de México at the following link: https://tesiunam.dgb.unam.mx/F?func=find-b-0&local_base=TES01.
A scraper was built to collect the information. The TESIUNAM database was chosen: it is a catalog of the theses of candidates who obtained a degree at UNAM, as well as the undergraduate theses of schools incorporated into it.
To do so, the University's Academic Offering (http://oferta.unam.mx/indice-alfabetico.html) was consulted first, and each of the 131 undergraduate programs was extracted from it as a list. Each case present in the database was then analyzed, since some programs have more than 10 theses, others fewer than 10, and some only one or no thesis available. Selenium was used to drive a web browser (Edge), and the scraper is currently configured to fetch the first 20 theses, or fewer, per program.
This scraper obtains the following from the database:
- Author's first name(s)
- Author's last name(s)
- Thesis title
- Thesis year
- Thesis degree program
The scraper also downloads each thesis into the Downloads folder of the local machine. A "Resumen/Introduccion/Conclusion" (abstract/introduction/conclusion) field was added to the CSV produced by the scraper, using whichever of the three was available first, since the main difficulty lies in the differences in structure and format across theses.
#### Who are the source language producers?
The data is created manually by humans, in this case UNAM students, and reviewed by their supervisors.
### Annotations
The dataset was processed to remove information that the classifiers do not need. The original dataset has the following fields: "texto|autor_nombre|autor_apellido|titulo|año|carrera".
#### Annotation process
First, 200 theses were extracted for each of 5 degree programs of this university: Actuaría, Derecho, Economía, Psicología, and Química Farmacéutico Biológica. From each thesis the following was extracted: the introduction, the author's first and last names, the thesis title, and the degree program. The data was reviewed and cleaned by the authors.
The dataset was then processed with the following Natural Language Processing tasks (dataset_tesis_procesado):
- lowercasing
- tokenization
- removal of non-alphanumeric tokens
- stop-word removal
- stemming: removal of plurals
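A minimal sketch of that pipeline in plain Python (the stop-word list and the plural-stripping rule below are illustrative; the card does not specify which tokenizer, stop-word list, or stemmer the authors actually used):

```python
import re

# Illustrative stop-word list; the authors' actual list is not specified.
STOPWORDS = {"de", "la", "el", "en", "y", "a", "los", "las", "del", "que"}

def preprocess(text: str) -> list:
    text = text.lower()                                  # 1. lowercase
    tokens = re.findall(r"\w+", text)                    # 2. tokenize
    tokens = [t for t in tokens if t.isalnum()]          # 3. keep alphanumeric tokens
    tokens = [t for t in tokens if t not in STOPWORDS]   # 4. remove stop words
    # 5. naive de-pluralization: strip a trailing "s" from longer tokens
    tokens = [t[:-1] if t.endswith("s") and len(t) > 3 else t for t in tokens]
    return tokens

print(preprocess("Las condiciones del aprendizaje en los alumnos"))
# -> ['condicione', 'aprendizaje', 'alumno']
```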
#### Who are the annotators?
The annotations were made by humans, in this case the dataset authors, using Python code.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset will facilitate search and research related to theses in Spanish, through their automatic categorization by a model trained on this dataset. This task contributes to UN Sustainable Development Goal 4: Quality Education (https://www.un.org/sustainabledevelopment/es/objetivos-de-desarrollo-sostenible/).
### Discussion of Biases
The text has some encoding errors, so some characters, such as accented letters, are not displayed correctly. Words containing these characters are removed during preprocessing until the problem is fixed.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Team members (Hugging Face username):
[Isaac Isaías López López](https://huggingface.co/MajorIsaiah)
[Yisel Clavel Quintero](https://huggingface.co/clavel)
[Dionis López](https://huggingface.co/inoid)
[Ximena Yeraldin López López](https://huggingface.co/Ximyer)
### Licensing Information
Version 1.0.0 of the unam_tesis dataset is released under the <a href='http://www.apache.org/licenses/LICENSE-2.0'>Apache-2.0 License</a>.
### Citation Information
"This dataset was created as part of the 2022 Spanish NLP Hackathon (Hackathon 2022 de PLN en Español) organized by Somos NLP and sponsored by Platzi, Paperspace and Hugging Face: https://huggingface.co/hackathon-pln-es."
To cite this dataset, please use the following citation format:
@inproceedings{unam_tesis_2022,
  title={UNAM's Theses with BETO fine-tuning classify},
  author={López López, Isaac Isaías and Clavel Quintero, Yisel and López Ramos, Dionis and López López, Ximena Yeraldin},
  booktitle={Hackathon 2022 de PLN en Español},
  year={2022}
}
### Contributions
Thanks to [@yiselclavel](https://github.com/yiselclavel) and [@IsaacIsaias](https://github.com/IsaacIsaias) for adding this dataset.
|
SocialGrep/the-reddit-place-dataset | 2022-07-01T17:51:57.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | The written history of /r/Place, in posts and comments. | null | null | 1 | 6 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for the-reddit-place-dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-place-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditplacedataset)
### Dataset Summary
The written history of /r/Place, in posts and comments.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
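A quick sketch of how these fields might be used downstream (the rows below are fabricated for illustration; the real data ships as separate post and comment files):

```python
# Fabricated comment rows using the field names documented above.
comments = [
    {"type": "comment", "id": "abc123", "subreddit.name": "place",
     "score": 42, "sentiment": 0.8, "body": "The blue corner lives on!"},
    {"type": "comment", "id": "def456", "subreddit.name": "place",
     "score": -3, "sentiment": -0.5, "body": "who keeps griefing the flag"},
]

# Keep upvoted comments with positive sentiment.
positive = [c for c in comments if c["score"] > 0 and c["sentiment"] > 0]
print([c["id"] for c in positive])  # -> ['abc123']
```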
## Additional Information
### Licensing Information
CC-BY v4.0
|
lm233/humor_train | 2022-04-08T18:13:45.000Z | [
"region:us"
] | lm233 | null | null | null | 1 | 6 | annotations_creators: []
language_creators: []
languages: []
licenses: []
multilinguality: []
pretty_name: humor_train
size_categories: []
source_datasets: []
task_categories: []
task_ids: [] |
student/FFHQ | 2022-04-16T06:24:36.000Z | [
"region:us"
] | student | null | null | null | 2 | 6 | FFHQ 70000张png图片
链接:https://pan.baidu.com/s/1XDfTKWOhtwAAQQJ0KBU4RQ
提取码:bowj
## Flickr-Faces-HQ Dataset (FFHQ)






Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN):
> **A Style-Based Generator Architecture for Generative Adversarial Networks**<br>
> Tero Karras (NVIDIA), Samuli Laine (NVIDIA), Timo Aila (NVIDIA)<br>
> http://stylegan.xyz/paper
The dataset consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from [Flickr](https://www.flickr.com/), thus inheriting all the biases of that website, and automatically aligned and cropped using [dlib](http://dlib.net/). Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally [Amazon Mechanical Turk](https://www.mturk.com/) was used to remove the occasional statues, paintings, or photos of photos.
For business inquiries, please contact [researchinquiries@nvidia.com](mailto:researchinquiries@nvidia.com)
For press and other inquiries, please contact Hector Marinez at [hmarinez@nvidia.com](mailto:hmarinez@nvidia.com)
## Licenses
The individual images were published in Flickr by their respective authors under either [Creative Commons BY 2.0](https://creativecommons.org/licenses/by/2.0/), [Creative Commons BY-NC 2.0](https://creativecommons.org/licenses/by-nc/2.0/), [Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/), [Public Domain CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/), or [U.S. Government Works](http://www.usa.gov/copyright.shtml) license. All of these licenses allow **free use, redistribution, and adaptation for non-commercial purposes**. However, some of them require giving **appropriate credit** to the original author, as well as **indicating any changes** that were made to the images. The license and original author of each image are indicated in the metadata.
* [https://creativecommons.org/licenses/by/2.0/](https://creativecommons.org/licenses/by/2.0/)
* [https://creativecommons.org/licenses/by-nc/2.0/](https://creativecommons.org/licenses/by-nc/2.0/)
* [https://creativecommons.org/publicdomain/mark/1.0/](https://creativecommons.org/publicdomain/mark/1.0/)
* [https://creativecommons.org/publicdomain/zero/1.0/](https://creativecommons.org/publicdomain/zero/1.0/)
* [http://www.usa.gov/copyright.shtml](http://www.usa.gov/copyright.shtml)
The dataset itself (including JSON metadata, download script, and documentation) is made available under [Creative Commons BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license by NVIDIA Corporation. You can **use, redistribute, and adapt it for non-commercial purposes**, as long as you (a) give appropriate credit by **citing our paper**, (b) **indicate any changes** that you've made, and (c) distribute any derivative works **under the same license**.
* [https://creativecommons.org/licenses/by-nc-sa/4.0/](https://creativecommons.org/licenses/by-nc-sa/4.0/)
## Overview
All data is hosted on Google Drive:
| Path | Size | Files | Format | Description
| :--- | :--: | ----: | :----: | :----------
| [ffhq-dataset](https://drive.google.com/open?id=1u2xu7bSrWxrbUxk-dT-UvEJq8IjdmNTP) | 2.56 TB | 210,014 | | Main folder
| ├ [ffhq-dataset-v1.json](https://drive.google.com/open?id=1IB0BFbN_eRZx9UkJqLHSgJiQhqX-PrI6) | 254 MB | 1 | JSON | Metadata including copyright info, URLs, etc.
| ├ [images1024x1024](https://drive.google.com/open?id=1u3Hbfn3Q6jsTlte3BY85CGwId77H-OOu) | 89.1 GB | 70,000 | PNG | Aligned and cropped images at 1024×1024
| ├ [thumbnails128x128](https://drive.google.com/open?id=1uJkWCpLUM-BnXW3H_IgVMdfENeNDFNmC) | 1.95 GB | 70,000 | PNG | Thumbnails at 128×128
| ├ [in-the-wild-images](https://drive.google.com/open?id=1YyuocbwILsHAjTusSUG-_zL343jlVBhf) | 955 GB | 70,000 | PNG | Original images from Flickr
| ├ [tfrecords](https://drive.google.com/open?id=1LTBpJ0W_WLjqza3zdayligS8Dh1V1gA6) | 273 GB | 9 | tfrecords | Multi-resolution data for [StyleGAN](http://stylegan.xyz/code) and [ProGAN](https://github.com/tkarras/progressive_growing_of_gans)
| └ [zips](https://drive.google.com/open?id=1WocxvZ4GEZ1DI8dOz30aSj2zT6pkATYS) | 1.28 TB | 4 | ZIP | Contents of each folder as a ZIP archive.
High-level statistics:

For use cases that require separate training and validation sets, we have appointed the first 60,000 images to be used for training and the remaining 10,000 for validation. In the [StyleGAN paper](http://stylegan.xyz/paper), however, we used all 70,000 images for training.
We have explicitly made sure that there are no duplicate images in the dataset itself. However, please note that the `in-the-wild` folder may contain multiple copies of the same image in cases where we extracted several different faces from the same image.
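That convention can be sketched as a small helper (assuming the 0-based index implied by file names such as `images1024x1024/00000.png`):

```python
# Map an FFHQ image index to the documented 60k/10k train/validation split.
def ffhq_split(index: int) -> str:
    if not 0 <= index < 70_000:
        raise ValueError("FFHQ indices run from 0 to 69999")
    return "training" if index < 60_000 else "validation"

print(ffhq_split(0), ffhq_split(59_999), ffhq_split(60_000))
# -> training training validation
```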
## Download script
You can either grab the data directly from Google Drive or use the provided [download script](./download_ffhq.py). The script makes things considerably easier by automatically downloading all the requested files, verifying their checksums, retrying each file several times on error, and employing multiple concurrent connections to maximize bandwidth.
```
> python download_ffhq.py -h
usage: download_ffhq.py [-h] [-j] [-s] [-i] [-t] [-w] [-r] [-a]
[--num_threads NUM] [--status_delay SEC]
[--timing_window LEN] [--chunk_size KB]
[--num_attempts NUM]
Download Flickr-Face-HQ (FFHQ) dataset to current working directory.
optional arguments:
-h, --help show this help message and exit
-j, --json download metadata as JSON (254 MB)
-s, --stats print statistics about the dataset
-i, --images download 1024x1024 images as PNG (89.1 GB)
-t, --thumbs download 128x128 thumbnails as PNG (1.95 GB)
-w, --wilds download in-the-wild images as PNG (955 GB)
-r, --tfrecords download multi-resolution TFRecords (273 GB)
-a, --align recreate 1024x1024 images from in-the-wild images
--num_threads NUM number of concurrent download threads (default: 32)
--status_delay SEC time between download status prints (default: 0.2)
--timing_window LEN samples for estimating download eta (default: 50)
--chunk_size KB chunk size for each download thread (default: 128)
--num_attempts NUM number of download attempts per file (default: 10)
```
```
> python ..\download_ffhq.py --json --images
Downloading JSON metadata...
\ 100.00% done 1/1 files 0.25/0.25 GB 43.21 MB/s ETA: done
Parsing JSON metadata...
Downloading 70000 files...
| 100.00% done 70000/70000 files 89.19 GB/89.19 GB 59.87 MB/s ETA: done
```
The script also serves as a reference implementation of the automated scheme that we used to align and crop the images. Once you have downloaded the in-the-wild images with `python download_ffhq.py --wilds`, you can run `python download_ffhq.py --align` to reproduce exact replicas of the aligned 1024×1024 images using the facial landmark locations included in the metadata.
## Metadata
The `ffhq-dataset-v1.json` file contains the following information for each image in a machine-readable format:
```
{
"0": { # Image index
"category": "training", # Training or validation
"metadata": { # Info about the original Flickr photo:
"photo_url": "https://www.flickr.com/photos/...", # - Flickr URL
"photo_title": "DSCF0899.JPG", # - File name
"author": "Jeremy Frumkin", # - Author
"country": "", # - Country where the photo was taken
"license": "Attribution-NonCommercial License", # - License name
"license_url": "https://creativecommons.org/...", # - License detail URL
"date_uploaded": "2007-08-16", # - Date when the photo was uploaded to Flickr
"date_crawled": "2018-10-10" # - Date when the photo was crawled from Flickr
},
"image": { # Info about the aligned 1024x1024 image:
"file_url": "https://drive.google.com/...", # - Google Drive URL
"file_path": "images1024x1024/00000.png", # - Google Drive path
"file_size": 1488194, # - Size of the PNG file in bytes
"file_md5": "ddeaeea6ce59569643715759d537fd1b", # - MD5 checksum of the PNG file
"pixel_size": [1024, 1024], # - Image dimensions
"pixel_md5": "47238b44dfb87644460cbdcc4607e289", # - MD5 checksum of the raw pixel data
"face_landmarks": [...] # - 68 face landmarks reported by dlib
},
"thumbnail": { # Info about the 128x128 thumbnail:
"file_url": "https://drive.google.com/...", # - Google Drive URL
"file_path": "thumbnails128x128/00000.png", # - Google Drive path
"file_size": 29050, # - Size of the PNG file in bytes
"file_md5": "bd3e40b2ba20f76b55dc282907b89cd1", # - MD5 checksum of the PNG file
"pixel_size": [128, 128], # - Image dimensions
"pixel_md5": "38d7e93eb9a796d0e65f8c64de8ba161" # - MD5 checksum of the raw pixel data
},
"in_the_wild": { # Info about the in-the-wild image:
"file_url": "https://drive.google.com/...", # - Google Drive URL
"file_path": "in-the-wild-images/00000.png", # - Google Drive path
"file_size": 3991569, # - Size of the PNG file in bytes
"file_md5": "1dc0287e73e485efb0516a80ce9d42b4", # - MD5 checksum of the PNG file
"pixel_size": [2016, 1512], # - Image dimensions
"pixel_md5": "86b3470c42e33235d76b979161fb2327", # - MD5 checksum of the raw pixel data
"face_rect": [667, 410, 1438, 1181], # - Axis-aligned rectangle of the face region
"face_landmarks": [...], # - 68 face landmarks reported by dlib
"face_quad": [...] # - Aligned quad of the face region
}
},
...
}
```
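The download script already verifies checksums automatically; as a sketch, checking a local file against the `file_md5` field could look like this (the metadata dict below is abridged and the bytes are fake):

```python
import hashlib

# Verify a downloaded file against the MD5 recorded in the metadata.
def md5_matches(file_bytes: bytes, expected_md5: str) -> bool:
    return hashlib.md5(file_bytes).hexdigest() == expected_md5

fake_png = b"not a real PNG, just demo bytes"
meta = {"image": {"file_path": "images1024x1024/00000.png",
                  "file_md5": hashlib.md5(fake_png).hexdigest()}}

print(md5_matches(fake_png, meta["image"]["file_md5"]))      # -> True
print(md5_matches(b"corrupted", meta["image"]["file_md5"]))  # -> False
```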
## Acknowledgements
We thank Jaakko Lehtinen, David Luebke, and Tuomas Kynkäänniemi for in-depth discussions and helpful comments; Janne Hellsten, Tero Kuosmanen, and Pekka Jänis for compute infrastructure and help with the code release.
We also thank Vahid Kazemi and Josephine Sullivan for their work on automatic face detection and alignment that enabled us to collect the data in the first place:
> **One Millisecond Face Alignment with an Ensemble of Regression Trees**<br>
> Vahid Kazemi, Josephine Sullivan<br>
> Proc. CVPR 2014<br>
> https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Kazemi_One_Millisecond_Face_2014_CVPR_paper.pdf
|
agemagician/NetSurfP-SS3 | 2022-04-18T03:43:55.000Z | [
"region:us"
] | agemagician | null | null | null | 1 | 6 | Entry not found |
pietrolesci/fracas | 2022-04-25T08:40:07.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 6 | ## Overview
Original dataset [here](https://github.com/felipessalvatore/NLI_datasets).
The original description is reported below for convenience.
```latex
@MISC{Fracas96,
author = {{The Fracas Consortium} and Robin Cooper and Dick Crouch and Jan Van Eijck and Chris Fox and Josef Van Genabith and Jan Jaspars and Hans Kamp and David Milward and Manfred Pinkal and Massimo Poesio and Steve Pulman and Ted Briscoe and Holger Maier and Karsten Konrad},
title = {Using the Framework},
year = {1996}
}
```
Adapted from [https://nlp.stanford.edu/~wcmac/downloads/fracas.xml](https://nlp.stanford.edu/~wcmac/downloads/fracas.xml). We took `P1, ..., Pn` as premise and H as hypothesis. Labels have been mapped as follows `{'yes': "entailment", 'no': 'contradiction', 'undef': "neutral", 'unknown': "neutral"}`. And we randomly split 80/20 for train/dev.
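The creation code on this card loads already-split CSVs, so the 80/20 split itself is not shown; a seeded sketch of such a split might look like this (the original seed and shuffling procedure are not documented, so this will not reproduce the published split):

```python
import random

# Seeded 80/20 split sketch; seed 0 is an arbitrary choice, not the original one.
def split_80_20(examples: list, seed: int = 0) -> tuple:
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]

train, dev = split_80_20(list(range(100)))
print(len(train), len(dev))  # -> 80 20
```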
## Dataset curation
One hypothesis in the dev set and three hypotheses in the train set are empty and have been
filled in with the empty string `""`. Labels are encoded with custom NLI mapping, that is
```
{"entailment": 0, "neutral": 1, "contradiction": 2}
```
## Code to create the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict, load_dataset
from pathlib import Path
# load datasets
path = Path("<path to folder>/nli_datasets")
datasets = {}
for dataset_path in path.iterdir():
datasets[dataset_path.name] = {}
for name in dataset_path.iterdir():
df = pd.read_csv(name)
datasets[dataset_path.name][name.name.split(".")[0]] = df
ds = {}
for name, df_ in datasets["fracas"].items():
df = df_.copy()
assert df["label"].isna().sum() == 0
# fill-in empty hypothesis
df = df.fillna("")
# encode labels
df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds[name] = Dataset.from_pandas(df, features=features)
dataset = DatasetDict(ds)
dataset.push_to_hub("fracas", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["label", "premise", "hypothesis"],
how="inner",
).shape[0],
)
#> train - dev: 0
``` |
bigscience-data/roots_en_uncorpus | 2022-12-12T10:59:37.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 6 | ---
language: en
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_uncorpus
# uncorpus
- Dataset uid: `uncorpus`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.8023 % of total
- 10.7390 % of ar
- 5.7970 % of fr
- 9.7477 % of es
- 2.0417 % of en
- 1.2540 % of zh
### BigScience processing steps
#### Filters applied to: ar
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: es
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: zh
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
|
pietrolesci/mpe | 2022-04-25T09:00:18.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 6 | ## Overview
Original dataset [here](https://github.com/aylai/MultiPremiseEntailment).
## Dataset curation
Same data and splits as the original. The following columns have been added:
- `premise`: concatenation of `premise1`, `premise2`, `premise3`, and `premise4`
- `label`: encoded `gold_label` with the following mapping `{"entailment": 0, "neutral": 1, "contradiction": 2}`
## Code to create the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict
from pathlib import Path
# read data
path = Path("<path to files>")
datasets = {}
for dataset_path in path.rglob("*.txt"):
df = pd.read_csv(dataset_path, sep="\t")
datasets[dataset_path.name.split("_")[1].split(".")[0]] = df
ds = {}
for name, df_ in datasets.items():
df = df_.copy()
# fix parsing error for dev split
if name == "dev":
# fix parsing error
df.loc[df["contradiction_judgments"] == "3 contradiction", "contradiction_judgments"] = 3
df.loc[df["gold_label"].isna(), "gold_label"] = "contradiction"
# check no nan
assert df.isna().sum().sum() == 0
# fix dtypes
for col in ("entailment_judgments", "neutral_judgments", "contradiction_judgments"):
df[col] = df[col].astype(int)
# fix premise column
for i in range(1, 4 + 1):
df[f"premise{i}"] = df[f"premise{i}"].str.split("/", expand=True)[1]
df["premise"] = df[[f"premise{i}" for i in range(1, 4 + 1)]].agg(" ".join, axis=1)
# encode labels
df["label"] = df["gold_label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})
# cast to dataset
features = Features({
"premise1": Value(dtype="string", id=None),
"premise2": Value(dtype="string", id=None),
"premise3": Value(dtype="string", id=None),
"premise4": Value(dtype="string", id=None),
"premise": Value(dtype="string", id=None),
"hypothesis": Value(dtype="string", id=None),
"entailment_judgments": Value(dtype="int32"),
"neutral_judgments": Value(dtype="int32"),
"contradiction_judgments": Value(dtype="int32"),
"gold_label": Value(dtype="string"),
"label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
})
ds[name] = Dataset.from_pandas(df, features=features)
# push to hub
ds = DatasetDict(ds)
ds.push_to_hub("mpe", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> dev - test: 0
#> dev - train: 0
#> test - train: 0
``` |
pietrolesci/add_one_rte | 2022-04-25T08:48:42.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 6 | ## Overview
Original data available [here](http://www.seas.upenn.edu/~nlp/resources/AN-composition.tgz).
## Dataset curation
`premise` and `hypothesis` columns have been cleaned following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L51-L52), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L31-L32)), that is
- remove HTML tags `<b>`, `<u>`, `</b>`, `</u>`
- normalize repeated white spaces
- strip
`mean_human_score` has been transformed into class labels following common practices ([1](https://github.com/rabeehk/robust-nli/blob/c32ff958d4df68ac2fad9bf990f70d30eab9f297/data/scripts/add_one_rte.py#L20-L35), [2](https://github.com/azpoliak/hypothesis-only-NLI/blob/b045230437b5ba74b9928ca2bac5e21ae57876b9/data/convert_add_1_rte.py#L6-L17)), that is
- for test set: `mean_human_score <= 3 -> "not-entailed"` and `mean_human_score >= 4 -> "entailed"` (anything between 3 and 4 has been removed)
- for all other splits: `mean_human_score < 3.5 -> "not-entailed"` else `"entailed"`
more details below.
## Code to generate the dataset
```python
import pandas as pd
from datasets import Features, Value, ClassLabel, Dataset, DatasetDict
def convert_label(score, is_test):
if is_test:
if score <= 3:
return "not-entailed"
elif score >= 4:
return "entailed"
return "REMOVE"
if score < 3.5:
return "not-entailed"
return "entailed"
ds = {}
for split in ("dev", "test", "train"):
# read data
df = pd.read_csv(f"<path to folder>/AN-composition/addone-entailment/splits/data.{split}", sep="\t", header=None)
df.columns = ["mean_human_score", "binary_label", "sentence_id", "adjective", "noun", "premise", "hypothesis"]
# clean text from html tags and useless spaces
for col in ("premise", "hypothesis"):
df[col] = (
df[col]
.str.replace("(<b>)|(<u>)|(</b>)|(</u>)", " ", regex=True)
.str.replace(" {2,}", " ", regex=True)
.str.strip()
)
# encode labels
if split == "test":
df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, True))
df = df.loc[df["label"] != "REMOVE"]
else:
df["label"] = df["mean_human_score"].map(lambda x: convert_label(x, False))
assert df["label"].isna().sum() == 0
df["label"] = df["label"].map({"not-entailed": 0, "entailed": 1})
# cast to dataset
features = Features({
"mean_human_score": Value(dtype="float32"),
"binary_label": Value(dtype="string"),
"sentence_id": Value(dtype="string"),
"adjective": Value(dtype="string"),
"noun": Value(dtype="string"),
"premise": Value(dtype="string"),
"hypothesis": Value(dtype="string"),
"label": ClassLabel(num_classes=2, names=["not-entailed", "entailed"]),
})
ds[split] = Dataset.from_pandas(df, features=features)
ds = DatasetDict(ds)
ds.push_to_hub("add_one_rte", token="<token>")
# check overlap between splits
from itertools import combinations
for i, j in combinations(ds.keys(), 2):
print(
f"{i} - {j}: ",
pd.merge(
ds[i].to_pandas(),
ds[j].to_pandas(),
on=["premise", "hypothesis", "label"],
how="inner",
).shape[0],
)
#> dev - test: 0
#> dev - train: 0
#> test - train: 0
``` |
bigscience-data/roots_en_wikinews | 2022-12-12T11:02:53.000Z | [
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 6 | ---
language: en
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_wikinews
# wikinews_filtered
- Dataset uid: `wikinews_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0307 % of total
- 0.0701 % of ar
- 0.3036 % of pt
- 0.0271 % of en
- 0.0405 % of fr
- 0.2119 % of indic-ta
- 0.0081 % of zh
- 0.0510 % of es
- 0.0725 % of ca
### BigScience processing steps
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
|
bigscience-data/roots_id_indonesian_news_articles_2017 | 2022-12-12T11:05:35.000Z | [
"language:id",
"license:cc0-1.0",
"region:us"
] | bigscience-data | null | null | null | 2 | 6 | ---
language: id
license: cc0-1.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_indonesian_news_articles_2017
# Indonesian News Articles 2017
- Dataset uid: `indonesian_news_articles_2017`
### Description
Indonesian news articles published in 2017; each article contains its published date, content, title, and source.
### Homepage
kaggle.com/aashari/indonesian-news-articles-published-at-2017
### Licensing
- public domain
- cc0-1.0: Creative Commons Zero v1.0 Universal
CC0: Public Domain
### Speaker Locations
- Asia
- Indonesia
### Sizes
- 0.0688 % of total
- 26.1751 % of id
### BigScience processing steps
#### Filters applied to: id
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
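The filter names above are largely self-describing; below is a minimal, illustrative Python sketch of two of them. This is a hypothetical reimplementation for clarity only — the actual BigScience preprocessing code may differ in details.

```python
def filter_small_docs_bytes_300(doc: str) -> bool:
    """Keep only documents of at least 300 UTF-8 bytes."""
    return len(doc.encode("utf-8")) >= 300


def dedup_document(docs):
    """Drop exact duplicate documents, keeping first occurrences in order."""
    seen = set()
    out = []
    for doc in docs:
        if doc not in seen:
            seen.add(doc)
            out.append(doc)
    return out


docs = ["a" * 400, "a" * 400, "short doc"]
deduped = dedup_document(docs)          # exact duplicate removed
kept = [d for d in deduped if filter_small_docs_bytes_300(d)]  # short doc dropped
```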
|
bigscience-data/roots_vi_wikibooks | 2022-12-12T11:16:36.000Z | [
"language:vi",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 6 | ---
language: vi
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_vi_wikibooks
# wikibooks_filtered
- Dataset uid: `wikibooks_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0897 % of total
- 0.2591 % of en
- 0.0965 % of fr
- 0.1691 % of es
- 0.2834 % of indic-hi
- 0.2172 % of pt
- 0.0149 % of zh
- 0.0279 % of ar
- 0.1374 % of vi
- 0.5025 % of id
- 0.3694 % of indic-ur
- 0.5744 % of eu
- 0.0769 % of ca
- 0.0519 % of indic-ta
- 0.1470 % of indic-mr
- 0.0751 % of indic-te
- 0.0156 % of indic-bn
- 0.0476 % of indic-ml
- 0.0087 % of indic-pa
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_vi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_eu
- dedup_template_soft
- replace_newline_with_space
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-mr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-te
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-bn
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ml
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-pa
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
|
bigscience-data/roots_zh_wikibooks | 2022-12-12T11:17:18.000Z | [
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | bigscience-data | null | null | null | 7 | 6 | ---
language: zh
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_zh_wikibooks
# wikibooks_filtered
- Dataset uid: `wikibooks_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0897 % of total
- 0.2591 % of en
- 0.0965 % of fr
- 0.1691 % of es
- 0.2834 % of indic-hi
- 0.2172 % of pt
- 0.0149 % of zh
- 0.0279 % of ar
- 0.1374 % of vi
- 0.5025 % of id
- 0.3694 % of indic-ur
- 0.5744 % of eu
- 0.0769 % of ca
- 0.0519 % of indic-ta
- 0.1470 % of indic-mr
- 0.0751 % of indic-te
- 0.0156 % of indic-bn
- 0.0476 % of indic-ml
- 0.0087 % of indic-pa
### BigScience processing steps
#### Filters applied to: en
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_en
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_fr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_es
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-hi
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- filter_small_docs_bytes_300
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: zh
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_zhs
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ar
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_vi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ur
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_eu
- dedup_template_soft
- replace_newline_with_space
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ta
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-mr
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-te
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-bn
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-ml
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-pa
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
|
silver/personal_dialog | 2022-07-10T13:05:21.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:zh",
"license:other",
"arxiv:1901.09672",
"region:us"
] | silver | The PersonalDialog dataset is a large-scale multi-turn Chinese dialogue dataset containing various traits from a large number of speakers.
We are releasing about 5M sessions of carefully filtered dialogues.
Each utterance in PersonalDialog is associated with a speaker marked with traits like Gender, Location, Interest Tags. | @article{zheng2019personalized,
title = {Personalized dialogue generation with diversified traits},
author = {Zheng, Yinhe and Chen, Guanyi and Huang, Minlie and Liu, Song and Zhu, Xuan},
journal = {arXiv preprint arXiv:1901.09672},
year = {2019}
}
@inproceedings{zheng2020pre,
title = {A pre-training based personalized dialogue generation model with persona-sparse data},
author = {Zheng, Yinhe and Zhang, Rongsheng and Huang, Minlie and Mao, Xiaoxi},
booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {34},
number = {05},
pages = {9693--9700},
year = {2020}
} | null | 12 | 6 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- zh
license:
- other
multilinguality:
- monolingual
paperswithcode_id: personaldialog
pretty_name: "PersonalDialog"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
---
# Dataset Card for PersonalDialog
## Table of Contents
- [Dataset Card for PersonalDialog](#dataset-card-for-personaldialog)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.zhengyinhe.com/datasets/
- **Repository:** https://github.com/silverriver/PersonalDilaog
- **Paper:** https://arxiv.org/abs/1901.09672
### Dataset Summary
The PersonalDialog dataset is a large-scale multi-turn Chinese dialogue dataset containing various traits from a large number of speakers.
We are releasing about 5M sessions of carefully filtered dialogues.
Each utterance in PersonalDialog is associated with a speaker marked with traits like Gender, Location, Interest Tags.
### Supported Tasks and Leaderboards
- dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
- response-retrieval: The dataset can be used to train a reranker model for implementing a retrieval-based dialogue system.
### Languages
PersonalDialog is in Chinese: all dialogues in the dataset are written in Chinese.
## Dataset Structure
### Data Instances
`train` split:
```json
{
"dialog": ["那么 晚", "加班 了 刚 到 家 呀 !", "吃饭 了 么", "吃 过 了 !"],
"profile": [
{
"tag": ["间歇性神经病", "爱笑的疯子", "他们说我犀利", "爱做梦", "自由", "旅游", "学生", "双子座", "好性格"],
"loc": "福建 厦门", "gender": "male"
}, {
"tag": ["设计师", "健康养生", "热爱生活", "善良", "宅", "音樂", "时尚"],
"loc": "山东 济南", "gender": "male"
}
],
"uid": [0, 1, 0, 1],
}
```
`dev` and `test` split:
```json
{
"dialog": ["没 人性 啊 !", "可以 来 组织 啊", "来 上海 陪姐 打 ?"],
"profile": [
{"tag": [""], "loc": "上海 浦东新区", "gender": "female"},
{"tag": ["嘉庚", "keele", "leicester", "UK", "泉州五中"], "loc": "福建 泉州", "gender": "male"},
],
"uid": [0, 1, 0],
"responder_profile": {"tag": ["嘉庚", "keele", "leicester", "UK", "泉州五中"], "loc": "福建 泉州", "gender": "male"},
"golden_response": "吴经理 派车来 小 泉州 接 么 ?",
"is_biased": true,
}
```
### Data Fields
- `dialog` (list of strings): List of utterances consisting of a dialogue.
- `profile` (list of dicts): List of profiles associated with each speaker.
- `tag` (list of strings): List of tags associated with each speaker.
- `loc` (string): Location of each speaker.
- `gender` (string): Gender of each speaker.
- `uid` (list of int): Speaker id for each utterance in the dialogue.
- `responder_profile` (dict): Profile of the responder. (Only available in `dev` and `test` split)
- `golden_response` (str): Response of the responder. (Only available in `dev` and `test` split)
- `is_biased` (bool): Whether the dialogue is guaranteed to be persona-related or not. (Only available in `dev` and `test` split)
### Data Splits
|train|valid|test|
|---:|---:|---:|
|5,438,165 | 10,521 | 10,523 |
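The `uid` field links every utterance to an entry of `profile`. A minimal sketch using an abbreviated version of the `train` instance shown above (profiles shortened for brevity):

```python
# uid[i] indexes into `profile` to recover the speaker of the i-th utterance.
example = {
    "dialog": ["那么 晚", "加班 了 刚 到 家 呀 !", "吃饭 了 么", "吃 过 了 !"],
    "profile": [
        {"tag": ["学生"], "loc": "福建 厦门", "gender": "male"},
        {"tag": ["设计师"], "loc": "山东 济南", "gender": "male"},
    ],
    "uid": [0, 1, 0, 1],
}

# Pair each utterance with the profile of its speaker.
turns = [
    (utt, example["profile"][speaker])
    for utt, speaker in zip(example["dialog"], example["uid"])
]
# turns[1] pairs "加班 了 刚 到 家 呀 !" with the speaker located in 山东 济南
```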
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
other-weibo
This dataset is collected from Weibo.
Use of this dataset is subject to Weibo's [detailed policy](https://weibo.com/signup/v5/privacy).
Please restrict the usage of this dataset to non-commercial purposes.
### Citation Information
```bibtex
@article{zheng2019personalized,
title = {Personalized dialogue generation with diversified traits},
author = {Zheng, Yinhe and Chen, Guanyi and Huang, Minlie and Liu, Song and Zhu, Xuan},
journal = {arXiv preprint arXiv:1901.09672},
year = {2019}
}
@inproceedings{zheng2020pre,
title = {A pre-training based personalized dialogue generation model with persona-sparse data},
author = {Zheng, Yinhe and Zhang, Rongsheng and Huang, Minlie and Mao, Xiaoxi},
booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
volume = {34},
number = {05},
pages = {9693--9700},
year = {2020}
}
```
### Contributions
Thanks to [Yinhe Zheng](https://github.com/silverriver) for adding this dataset.
|
gcaillaut/frwiki_el | 2022-09-28T08:52:12.000Z | [
"task_categories:token-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:fr",
"license:wtfpl",
"region:us"
] | gcaillaut | French Wikipedia dataset for Entity Linking | null | null | 1 | 6 | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- fr
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: French Wikipedia dataset for Entity Linking
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
---
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [frwiki_el](https://github.com/GaaH/frwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
This dataset contains articles from the French Wikipedia.
It is intended to be used to train Entity Linking (EL) systems. Links in articles are used to detect named entities.
The dataset `frwiki` contains the sentences of each Wikipedia page.
The dataset `entities` contains a description of each Wikipedia page.
### Languages
- French
## Dataset Structure
### frwiki
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"sentences" : [
{
"text": "text of the current sentence",
"ner": ["list", "of", "ner", "labels"],
"mention_mappings": [
(start_of_first_mention, end_of_first_mention),
(start_of_second_mention, end_of_second_mention)
],
"el_wikidata_id": ["wikidata id of first mention", "wikidata id of second mention"],
"el_wikipedia_id": [wikipedia id of first mention, wikipedia id of second mention],
"el_wikipedia_title": ["wikipedia title of first mention", "wikipedia title of second mention"]
}
]
"words": ["words", "in", "the", "sentence"],
"ner": ["ner", "labels", "of", "each", "words"],
"el": ["el", "labels", "of", "each", "words"]
}
```
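A sketch of how `mention_mappings` can be used to recover mention surface forms. The record below is made up, and treating the offsets as character indices into `text` is an assumption to verify against the real data:

```python
# Hypothetical sentence record following the schema above.
sentence = {
    "text": "Paris est la capitale de la France .",
    "mention_mappings": [(0, 5), (28, 34)],
    "el_wikipedia_title": ["Paris", "France"],
}

# Slice the sentence text with each (start, end) pair to get the mentions.
mentions = [
    sentence["text"][start:end]
    for start, end in sentence["mention_mappings"]
]
```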
### entities
```
{
"name": "Title of the page",
"wikidata_id": "Identifier of the related Wikidata entity. Can be null.",
"wikipedia_id": "Identifier of the Wikipedia page",
"wikipedia_url": "URL to the Wikipedia page",
"wikidata_url": "URL to the Wikidata page. Can be null.",
"description": "Description of the entity"
}
``` |
nateraw/kinetics | 2022-06-16T02:30:12.000Z | [
"license:cc-by-4.0",
"region:us"
] | nateraw | null | @misc{https://doi.org/10.48550/arxiv.1705.06950,
doi = {10.48550/ARXIV.1705.06950},
url = {https://arxiv.org/abs/1705.06950},
author = {Kay, Will and Carreira, Joao and Simonyan, Karen and Zhang, Brian and Hillier, Chloe and Vijayanarasimhan, Sudheendra and Viola, Fabio and Green, Tim and Back, Trevor and Natsev, Paul and Suleyman, Mustafa and Zisserman, Andrew},
keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {The Kinetics Human Action Video Dataset},
publisher = {arXiv},
year = {2017},
copyright = {arXiv.org perpetual, non-exclusive license}
} | null | 0 | 6 | ---
license: cc-by-4.0
---
|
fmplaza/offendes | 2023-03-27T08:19:06.000Z | [
"language:es",
"license:apache-2.0",
"region:us"
] | fmplaza | Focusing on young influencers from the well-known social platforms of Twitter, Instagram, and YouTube,
we have collected the corpus OffendES, which is composed of Spanish comments manually labeled with pre-defined offensive categories. From the total corpus, we selected 30,416
posts to be publicly published; they correspond to the ones used in the MeOffendES competition at IberLEF 2021. | @inproceedings{plaza-del-arco-etal-2021-offendes,
title = "{O}ffend{ES}: A New Corpus in {S}panish for Offensive Language Research",
author = "{Plaza-del-Arco}, Flor Miriam and Montejo-R{\'a}ez, Arturo and Ure{\~n}a-L{\'o}pez, L. Alfonso and Mart{\'\i}n-Valdivia, Mar{\'\i}a-Teresa",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = sep,
year = "2021",
address = "Held Online",
language = "English",
pages = "1096--1108" } | null | 7 | 6 | ---
license: apache-2.0
language:
- es
---
# Dataset Card for OffendES
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Paper: OffendES:** [A New Corpus in Spanish for Offensive Language Research](https://aclanthology.org/2021.ranlp-1.123.pdf)
- **Leaderboard:** [Leaderboard for OffendES / Spanish](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6388)
- **Point of Contact: fmplaza@ujaen.es**
### Dataset Summary
Focusing on young influencers from the well-known social platforms of Twitter, Instagram, and YouTube, we have collected a corpus composed of Spanish comments manually labeled with pre-defined offensive categories. From the total corpus, we selected 30,416 posts to be publicly published; they correspond to the ones used in the MeOffendES competition at IberLEF 2021. The posts are labeled with the following categories:
- Offensive, the target is a person (OFP). Offensive text targeting a specific individual.
- Offensive, the target is a group of people or collective (OFG). Offensive text targeting a group of people belonging to the same ethnic group, gender or sexual orientation, political ideology, religious belief, or other common characteristics.
- Non-offensive, but with expletive language (NOE). A text that contains rude words, blasphemes, or swearwords but without the aim of offending, and usually with a positive connotation.
- Non-offensive (NO). Text that is neither offensive nor contains expletive language.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class offensive classification and binary offensive classification.
Competition [MeOffendES task on offensive detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6388)
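For the binary task, the four labels have to be collapsed into two classes. Below is one plausible binarization — an assumption for illustration; the official MeOffendES mapping may differ:

```python
# OFP/OFG target a person or a group -> offensive.
# NOE contains expletives but no offensive intent, so it is grouped with NO.
# (Hypothetical mapping; check the official task definition.)
TO_BINARY = {
    "OFP": "offensive",
    "OFG": "offensive",
    "NOE": "non-offensive",
    "NO": "non-offensive",
}

labels = ["NO", "OFP", "NOE", "OFG"]
binary = [TO_BINARY[label] for label in labels]
```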
### Languages
- Spanish
## Dataset Structure
### Data Instances
For each instance, there is a string for the comment id, the influencer, the comment text, the offensive gold label, the influencer's gender, and the social media platform. See the example below.
```
{'comment_id': '8003',
'influencer': 'dalas',
'comment': 'Estupido aburrido',
'label': 'NO',
'influencer_gender': 'man',
'media': 'youtube'
}
```
### Data Fields
- `comment_id`: a string to identify the comment
- `influencer`: a string containing the influencer associated with the comment
- `comment`: a string containing the text of the comment
- `label`: a string containing the offensive gold label
- `influencer_gender`: a string containing the gender of the influencer
- `media`: a string containing the social media platform where the comment has been retrieved
### Data Splits
The OffendES dataset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics for each class.
| `Class` | `Train` | `Validation` | `Test` |
| ------- | ------- | ------------ | ------ |
| NO | 13,212 | 64 | 9,651 |
| NOE | 1,235 | 22 | 2,340 |
| OFP | 2,051 | 10 | 1,404 |
| OFG | 212 | 4 | 211 |
| Total | 16,710 | 100 | 13,606 |
## Dataset Creation
### Source Data
Twitter, Youtube, Instagram
#### Who are the annotators?
Amazon Mechanical Turkers
## Additional Information
### Licensing Information
The OffendES dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```bibtex
@inproceedings{plaza-del-arco-etal-2021-offendes,
    title = "{O}ffend{ES}: A New Corpus in {S}panish for Offensive Language Research",
    author = "{Plaza-del-Arco}, Flor Miriam and Montejo-R{\'a}ez, Arturo and Ure{\~n}a-L{\'o}pez, L. Alfonso and Mart{\'\i}n-Valdivia, Mar{\'\i}a-Teresa",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    month = sep,
    year = "2021",
    address = "Held Online",
    url = "https://aclanthology.org/2021.ranlp-1.123.pdf",
    language = "English",
    pages = "1096--1108"
}

@article{meoffendes2021,
    title = "{{Overview of MeOffendEs at IberLEF 2021: Offensive Language Detection in Spanish Variants}}",
    author = "{Flor Miriam Plaza-del-Arco and Casavantes, Marco and Jair Escalante, Hugo and Martín-Valdivia, M. Teresa and Montejo-Ráez, Arturo and {Montes-y-Gómez}, Manuel and Jarquín-Vásquez, Horacio and Villaseñor-Pineda, Luis}",
    journal = "Procesamiento del Lenguaje Natural",
    url = "https://bit.ly/3QpRDfy",
    volume = "67",
    pages = "183--194",
    year = "2021"
}
``` |
lcampillos/ctebmsp | 2022-07-23T22:48:56.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | lcampillos | null | null | null | 1 | 6 | ---
license: cc-by-4.0
language:
- es
multilinguality:
- monolingual
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: CT-EBM-SP
---
# CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.lllf.uam.es/ESP/nlpmedterm_en.html
- **Repository:** http://www.lllf.uam.es/ESP/nlpdata/wp2/CT-EBM-SP.zip
- **Paper:** Campillos-Llanos, L., Valverde-Mateos, A., Capllonch-Carrión, A., & Moreno-Sandoval, A. (2021). A clinical trials corpus annotated with UMLS entities to enhance the access to evidence-based medicine. BMC medical informatics and decision making, 21(1), 1-19
- **Point of Contact:** leonardo.campillos AT gmail.com
### Dataset Summary
The [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) is a collection of 1200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-SP resource, please cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
### Supported Tasks
Medical Named Entity Recognition
### Languages
Spanish
## Dataset Structure
### Data Instances
- 292 173 tokens
- 46 699 entities of the following [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) semantic groups:
- ANAT (anatomy and body parts): 6728 entities
- CHEM (chemical and pharmacological substances): 9224 entities
- DISO (pathologic conditions): 13 067 entities
- PROC (therapeutic and diagnostic procedures, and laboratory analyses): 17 680 entities
### Data Splits
- Train: 175 203 tokens, 28 101 entities
- Development: 58 670 tokens, 9629 entities
- Test: 58 300 tokens, 8969 entities
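For token-classification training, the four UMLS semantic groups are typically encoded as IOB2 tags over the token sequence. An illustrative sketch with made-up Spanish tokens — the corpus' own distribution format (e.g. standoff annotations) may differ:

```python
# Made-up example: "ensayo de insulina en diabetes mellitus"
# with one PROC, one CHEM and one multi-token DISO entity.
tokens = ["ensayo", "de", "insulina", "en", "diabetes", "mellitus"]
tags = ["B-PROC", "O", "B-CHEM", "O", "B-DISO", "I-DISO"]

# Collect (label, surface form) spans from the IOB2 sequence.
entities = []
current = None
for tok, tag in zip(tokens, tags):
    if tag.startswith("B-"):
        current = [tag[2:], [tok]]
        entities.append(current)
    elif tag.startswith("I-") and current is not None:
        current[1].append(tok)
    else:
        current = None

spans = [(label, " ".join(words)) for label, words in entities]
```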
## Dataset Creation
### Source Data
- Abstracts from journals published under a Creative Commons license, available in [PubMed](https://pubmed.ncbi.nlm.nih.gov/) or the [Scientific Electronic Library Online (SciELO)](https://scielo.org/es/)
- Clinical trials announcements published in the [European Clinical Trials Register](https://www.clinicaltrialsregister.eu) and [Repositorio Español de Estudios Clínicos](https://reec.aemps.es)
### Annotations
#### Who are the annotators?
- Leonardo Campillos-Llanos, Computational Linguist, Consejo Superior de Investigaciones Científicas
- Adrián Capllonch-Carrión, Medical Doctor, Centro de Salud Retiro, Hospital Universitario Gregorio Marañón
- Ana Valverde-Mateos, Medical Lexicographer, Spanish Royal Academy of Medicine
## Considerations for Using the Data
**Disclosure**: This dataset is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision.
This resource is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of this dataset.
**Descargo de responsabilidad**: Este conjunto de datos se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas
La finalidad de este modelo es generalista, y puede tener sesgos y/u otro tipo de distorsiones indeseables.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos datos. |
Nexdata/Taiwanese_Mandarin_Speech_Data_by_Mobile_Phone_Guiding | 2023-08-30T10:39:25.000Z | [
"region:us"
] | Nexdata | null | null | null | 0 | 6 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Taiwanese_Mandarin_Speech_Data_by_Mobile_Phone_Guiding
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/64?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The data was collected from 203 Taiwanese speakers (137 female, 66 male), covering Taipei, Kaohsiung, Taichung, Tainan, etc. It was recorded in a quiet indoor environment. It can be used for speech recognition, machine translation, and voiceprint recognition model training and algorithm research.
For more details, please refer to the link: https://www.nexdata.ai/datasets/64?source=Huggingface
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train a model for Automatic Speech Recognition (ASR).
### Languages
Taiwanese Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions |
sasha/wino_bias_cloze2 | 2022-06-22T15:23:05.000Z | [
"region:us"
] | sasha | null | null | null | 1 | 6 | Entry not found |
sophieb/dynamically_generated_hate_speech_dataset | 2022-06-25T18:02:18.000Z | [
"region:us"
] | sophieb | null | null | null | 0 | 6 | # Dataset card for dynamically generated dataset hate speech detection
## Dataset summary
This dataset was dynamically generated for training and improving hate speech detection models. A group of trained annotators generated and labeled challenging examples designed to trick hate speech models so that the models could subsequently be improved. The dataset contains about 40,000 examples, of which 54% are labeled as hate speech. It also identifies the target of the hate speech, covering vulnerable, marginalized, and discriminated groups. Overall, this is a balanced dataset, which sets it apart from most hate speech datasets already available on the web.
This dataset was presented in the article [Learning from the Worst: Dynamically Generated Datasets to Improve Online Hate Detection](https://aclanthology.org/2021.acl-long.132.pdf), published in 2021. The article describes the process for generating and annotating the data, as well as how the generated data was used to train and improve hate speech detection models. The full author list is the following: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook), Zeerak Waseem (University of Sheffield), and Douwe Kiela (Facebook).
|
hungnm/multilingual-amazon-review-sentiment-processed | 2022-07-09T17:41:04.000Z | [
"license:mit",
"region:us"
] | hungnm | null | null | null | 0 | 6 | ---
license: mit
---
|
bhadresh-savani/image-to-style | 2022-07-20T08:58:29.000Z | [
"region:us"
] | bhadresh-savani | null | null | null | 0 | 6 | Entry not found |
LHF/escorpius-m | 2023-05-11T22:28:28.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:multilingual",
"size_categories:100B<n<1T",
"source_datasets:original",
"language:af",
"language:ar",
"language:bn",
"language:ca",
"language:cs",... | LHF | null | null | null | 2 | 6 | ---
license: cc-by-nc-nd-4.0
language:
- af
- ar
- bn
- ca
- cs
- da
- de
- el
- eu
- fa
- fi
- fr
- gl
- hi
- hr
- it
- ja
- ko
- mt
- nl
- no
- oc
- pa
- pl
- pt
- ro
- sl
- sr
- sv
- tr
- uk
- ur
multilinguality:
- multilingual
size_categories:
- 100B<n<1T
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# esCorpius Multilingual
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require vast amounts of data for (pre-)training, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling, but they have important shortcomings for languages other than English: they are either too small, or of low quality as a result of sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 PB of Common Crawl data. For some of the languages covered, it is the most extensive corpus with this level of quality in the extraction, cleaning and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we keep both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under a CC BY-NC-ND 4.0 license.
## Usage
Replace `revision` with the language of your choice (in this case, `it` for Italian):
```python
from datasets import load_dataset

dataset = load_dataset('LHF/escorpius-m', split='train', streaming=True, revision='it')
```
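Note that with `streaming=True` the returned object is a lazy iterable, so records should be consumed incrementally rather than materialized all at once. A minimal sketch of taking the first few records (the generator below merely stands in for the real stream, and its field names are invented):

```python
from itertools import islice

def take(stream, n):
    """Materialize the first n records of a (possibly huge) lazy stream."""
    return list(islice(stream, n))

# Stand-in for the streaming dataset: any iterable of dicts works the same way
fake_stream = ({"id": i, "text": f"doc {i}"} for i in range(10**9))
print(take(fake_stream, 2))  # [{'id': 0, 'text': 'doc 0'}, {'id': 1, 'text': 'doc 1'}]
```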
## Other corpora
- esCorpius-mr multilingual *raw* corpus (not deduplicated): https://huggingface.co/datasets/LHF/escorpius-mr
- esCorpius original *Spanish only* corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius
## Citation
Link to paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147
Cite this work:
```
@inproceedings{gutierrezfandino22_iberspeech,
author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas},
title={{esCorpius: A Massive Spanish Crawling Corpus}},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
year=2022,
booktitle={Proc. IberSPEECH 2022},
pages={126--130},
doi={10.21437/IberSPEECH.2022-26}
}
```
## Disclaimer
We did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus.
|
relbert/lexical_relation_classification | 2022-07-20T23:24:17.000Z | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | relbert | [Lexical Relation Classification](https://aclanthology.org/P19-1169/) | @inproceedings{wang-etal-2019-spherere,
title = "{S}phere{RE}: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings",
author = "Wang, Chengyu and
He, Xiaofeng and
Zhou, Aoying",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1169",
doi = "10.18653/v1/P19-1169",
pages = "1727--1737",
abstract = "Lexical relations describe how meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts.",
} | null | 1 | 6 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: Lexical Relation Classification
---
# Dataset Card for "relbert/lexical_relation_classification"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/P19-1169/](https://aclanthology.org/P19-1169/)
- **Dataset:** Lexical Relation Classification
### Dataset Summary
Five different datasets (`BLESS`, `CogALexV`, `EVALution`, `K&H+N`, `ROOT09`) for lexical relation classification used in [SphereRE](https://www.aclweb.org/anthology/P19-1169/).
The sizes of the five datasets are as follows:
| name | train | validation | test |
|---------------|------:|-------:|-----:|
| `BLESS` | 18582 | 1327 | 6637 |
| `CogALexV` | 3054 | - | 4260 |
| `EVALution` | 5160 | 372 | 1846 |
| `K&H+N` | 40256 | 2876 | 14377 |
| `ROOT09` | 8933 | 638 | 3191 |
## Dataset Structure
### Data Instances
An example looks as follows.
```
{"head": "turtle", "tail": "live", "relation": "event"}
```
The `head` and `tail` fields are the word pair, and `relation` is the corresponding relation label.
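For sequence-classification fine-tuning, each record can be flattened into a single input string plus its label; a minimal sketch (the `[SEP]`-joined template is an assumption, not part of the dataset):

```python
def to_classification_example(record, template="{head} [SEP] {tail}"):
    """Render a word-pair record as a single input string plus its label.

    `record` follows the card's schema: {"head": ..., "tail": ..., "relation": ...}.
    """
    text = template.format(head=record["head"], tail=record["tail"])
    return text, record["relation"]

example = {"head": "turtle", "tail": "live", "relation": "event"}
print(to_classification_example(example))  # ('turtle [SEP] live', 'event')
```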
### Citation Information
```
@inproceedings{wang-etal-2019-spherere,
title = "{S}phere{RE}: Distinguishing Lexical Relations with Hyperspherical Relation Embeddings",
author = "Wang, Chengyu and
He, Xiaofeng and
Zhou, Aoying",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1169",
doi = "10.18653/v1/P19-1169",
pages = "1727--1737",
abstract = "Lexical relations describe how meanings of terms relate to each other. Typical examples include hypernymy, synonymy, meronymy, etc. Automatic distinction of lexical relations is vital for NLP applications, and also challenging due to the lack of contextual signals to discriminate between such relations. In this work, we present a neural representation learning model to distinguish lexical relations among term pairs based on Hyperspherical Relation Embeddings (SphereRE). Rather than learning embeddings for individual terms, the model learns representations of relation triples by mapping them to the hyperspherical embedding space, where relation triples of different lexical relations are well separated. Experiments over several benchmarks confirm SphereRE outperforms state-of-the-arts.",
}
```
### LICENSE
The LICENSE of all the resources are under [CC-BY-NC-4.0](./LICENSE). Thus, they are freely available for academic purpose or individual research, but restricted for commercial use.
|
biglam/hansard_speech | 2022-07-27T12:30:30.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categor... | biglam | A dataset containing every speech in the House of Commons from May 1979-July 2020. | @misc{odell, evan_2021,
title={Hansard Speeches 1979-2021: Version 3.1.0},
DOI={10.5281/zenodo.4843485},
abstractNote={<p>Full details are available at <a href="https://evanodell.com/projects/datasets/hansard-data">https://evanodell.com/projects/datasets/hansard-data</a></p> <p><strong>Version 3.1.0 contains the following changes:</strong></p> <p>- Coverage up to the end of April 2021</p>},
note={This release is an update of previously released datasets. See full documentation for details.},
publisher={Zenodo},
author={Odell, Evan},
year={2021},
month={May} } | null | 2 | 6 | ---
annotations_creators:
- no-annotation
language:
- 'en'
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Hansard Speeches
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- speeches
- politics
- parliament
- British
task_categories:
- text-classification
- text-generation
task_ids:
- multi-class-classification
- language-modeling
- masked-language-modeling
---
# Dataset Card for Hansard speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://evanodell.com/projects/datasets/hansard-data/
- **Repository:** https://github.com/evanodell/hansard-data3
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Evan Odell](https://github.com/evanodell)
### Dataset Summary
A dataset containing every speech in the House of Commons from May 1979-July 2020. Quoted from the dataset homepage
> Please contact me if you find any errors in the dataset. The integrity of the public Hansard record is questionable at times, and while I have improved it, the data is presented "as is".
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used to classify speech transcripts by time period or by speech type
- `language-modeling`: This dataset can contribute to the training or the evaluation of language models for historical texts.
### Languages
`en:GB`
## Dataset Structure
### Data Instances
```
{
'id': 'uk.org.publicwhip/debate/1979-05-17a.390.0',
'speech': "Since the Minister for Consumer Affairs said earlier that the bread price rise would be allowed, in view of developing unemployment in the baking industry, and since the Mother's Pride bakery in my constituency is about to close, will the right hon. Gentleman give us a firm assurance that there will be an early debate on the future of the industry, so that the Government may announce that, thanks to the price rise, those workers will not now be put out of work?",
'display_as': 'Eric Heffer',
'party': 'Labour',
'constituency': 'Liverpool, Walton',
'mnis_id': '725',
'date': '1979-05-17',
'time': '',
'colnum': '390',
'speech_class': 'Speech',
'major_heading': 'BUSINESS OF THE HOUSE',
'minor_heading': '',
'oral_heading': '',
'year': '1979',
'hansard_membership_id': '5612',
'speakerid': 'uk.org.publicwhip/member/11615',
'person_id': '',
'speakername': 'Mr. Heffer',
'url': '',
'government_posts': [],
'opposition_posts': [],
'parliamentary_posts': ['Member, Labour Party National Executive Committee']
}
```
### Data Fields
|Variable|Description|
|---|---|
|id|The ID as assigned by mysociety|
|speech|The text of the speech|
|display_as| The standardised name of the MP.|
|party|The party an MP is member of at time of speech|
|constituency| Constituency represented by MP at time of speech|
|mnis_id| The MP's Members Name Information Service number|
|date|Date of speech|
|time|Time of speech|
|colnum |Column number in hansard record|
|speech_class |Type of speech|
|major_heading| Major debate heading|
|minor_heading| Minor debate heading|
|oral_heading| Oral debate heading|
|year |Year of speech|
|hansard_membership_id| ID used by mysociety|
|speakerid |ID used by mysociety|
|person_id |ID used by mysociety|
|speakername| MP name as appeared in Hansard record for speech|
|url| link to speech|
|government_posts| Government posts held by MP (list)|
|opposition_posts |Opposition posts held by MP (list)|
|parliamentary_posts| Parliamentary posts held by MP (list)|
### Data Splits
Train: 2694375
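Records with this schema can be sliced by the metadata fields above, for example by party and year; a minimal sketch (the toy records below are invented, and `year` is a string, as in the data instance above):

```python
def filter_speeches(records, party=None, year=None):
    """Select speech records matching the given party and/or year (both strings)."""
    selected = []
    for record in records:
        if party is not None and record.get("party") != party:
            continue
        if year is not None and record.get("year") != year:
            continue
        selected.append(record)
    return selected

# Toy records using a subset of the fields described above
speeches = [
    {"speakername": "Mr. Heffer", "party": "Labour", "year": "1979"},
    {"speakername": "Mr. Smith", "party": "Conservative", "year": "1980"},
]
print([s["speakername"] for s in filter_speeches(speeches, party="Labour")])  # ['Mr. Heffer']
```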
## Dataset Creation
### Curation Rationale
This dataset contains all the speeches made in the House of Commons and can be used for a number of deep learning tasks, such as detecting how language use and societal views have changed over the more than 40 years covered. The dataset also provides language closer to the spoken register used in an elite British institution.
### Source Data
#### Initial Data Collection and Normalization
The dataset is created by getting the data from [data.parliament.uk](http://data.parliament.uk/membersdataplatform/memberquery.aspx). There is no normalization.
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
None
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
This is public information, so it should not contain any personal or sensitive information
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to understand how language use and society's views have changed over time.
### Discussion of Biases
Because of the long time period this dataset spans, it might contain language and opinions that are unacceptable in modern society.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This dataset was built on top of [parlparse](https://github.com/mysociety/parlparse) by [Evan Odell](https://github.com/evanodell)
### Licensing Information
Creative Commons Attribution 4.0 International License
### Citation Information
```
@misc{odell, evan_2021,
title={Hansard Speeches 1979-2021: Version 3.1.0},
DOI={10.5281/zenodo.4843485},
abstractNote={<p>Full details are available at <a href="https://evanodell.com/projects/datasets/hansard-data">https://evanodell.com/projects/datasets/hansard-data</a></p> <p><strong>Version 3.1.0 contains the following changes:</strong></p> <p>- Coverage up to the end of April 2021</p>},
note={This release is an update of previously released datasets. See full documentation for details.},
publisher={Zenodo},
author={Odell, Evan},
year={2021},
month={May} }
```
Thanks to [@shamikbose](https://github.com/shamikbose) for adding this dataset. |
pinecone/movie-posters | 2022-08-20T17:57:23.000Z | [
"region:us"
] | pinecone | null | null | null | 0 | 6 | Entry not found |
KaranChand/atcosim_pruned_xlsr | 2022-08-03T12:57:56.000Z | [
"region:us"
] | KaranChand | null | null | null | 0 | 6 | Entry not found |
NX2411/AIhub-korean-speech-data-large | 2022-08-04T16:23:54.000Z | [
"license:apache-2.0",
"region:us"
] | NX2411 | null | null | null | 1 | 6 | ---
license: apache-2.0
---
|
Abhishekq10/sanad-full | 2022-08-09T13:52:50.000Z | [
"region:us"
] | Abhishekq10 | null | null | null | 0 | 6 | Entry not found |
Hobson/surname-nationality | 2022-12-29T23:14:09.000Z | [
"task_categories:token-classification",
"task_categories:text-classification",
"task_ids:named-entity-recognition",
"size_categories:List[str]",
"source_datasets:List[str]",
"license:mit",
"multilingual",
"RNN",
"name",
"tagging",
"nlp",
"transliterated",
"character-level",
"text-tagging",... | Hobson | null | null | null | 2 | 6 | ---
license: mit
size_categories: List[str]
source_datasets: List[str]
task_categories:
- token-classification
- text-classification
task_ids:
- named-entity-recognition
pretty_name: Popular Surname Nationality Mapping
tags:
- multilingual
- RNN
- name
- tagging
- nlp
- transliterated
- character-level
- text-tagging
- bias
- classification
- language model
- surname
- ethnicity
- multilabel classification
- natural language
---
# Popular Surname Nationality Mapping
Sample of popular surnames for 30+ countries labeled with nationality (language)
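As the card's tags suggest, a common use is character-level RNN classification; a minimal sketch of turning a surname into integer character indices (the alphabet and indexing scheme are assumptions, not part of the dataset):

```python
def encode_surname(name, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Map a surname to a list of character indices for an RNN input.

    Characters outside the alphabet map to index 0; known letters start at 1.
    """
    lookup = {ch: i + 1 for i, ch in enumerate(alphabet)}
    return [lookup.get(ch, 0) for ch in name.lower()]

print(encode_surname("Abe"))  # [1, 2, 5]
```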
|
research-backup/semeval2012_relational_similarity_v2 | 2022-08-16T19:38:09.000Z | [
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | research-backup | [SemEVAL 2012 task 2: Relational Similarity](https://aclanthology.org/S12-1047/) | @inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
} | null | 0 | 6 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
pretty_name: SemEval2012 task 2 Relational Similarity
---
# Dataset Card for "relbert/semeval2012_relational_similarity_v2"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://aclanthology.org/S12-1047/](https://aclanthology.org/S12-1047/)
- **Dataset:** SemEval2012: Relational Similarity
### Dataset Summary
***IMPORTANT***: This is the same dataset as [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity),
but with a different train/validation split.
Relational similarity dataset from [SemEval2012 task 2](https://aclanthology.org/S12-1047/), compiled to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
The dataset contains a list of positive and negative word pair from 89 pre-defined relations.
The relation types are constructed on top of the following 10 parent relation types.
```shell
{
1: "Class Inclusion", # Hypernym
2: "Part-Whole", # Meronym, Substance Meronym
3: "Similar", # Synonym, Co-hypornym
4: "Contrast", # Antonym
5: "Attribute", # Attribute, Event
6: "Non Attribute",
7: "Case Relation",
8: "Cause-Purpose",
9: "Space-Time",
10: "Representation"
}
```
Each of the parent relation is further grouped into child relation types where the definition can be found [here](https://drive.google.com/file/d/0BzcZKTSeYL8VenY0QkVpZVpxYnc/view?resourcekey=0-ZP-UARfJj39PcLroibHPHw).
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'relation_type': '8d',
'positives': [ [ "breathe", "live" ], [ "study", "learn" ], [ "speak", "communicate" ], ... ]
'negatives': [ [ "starving", "hungry" ], [ "clean", "bathe" ], [ "hungry", "starving" ], ... ]
}
```
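Each record bundles the positive and negative pairs for one relation type; for pairwise training they can be expanded into individually labeled rows, as in this minimal sketch:

```python
def expand_record(record):
    """Turn one relation record into (head, tail, relation_type, label) rows.

    Positive pairs get label 1, negative pairs label 0.
    """
    rows = []
    for head, tail in record["positives"]:
        rows.append((head, tail, record["relation_type"], 1))
    for head, tail in record["negatives"]:
        rows.append((head, tail, record["relation_type"], 0))
    return rows

record = {
    "relation_type": "8d",
    "positives": [["breathe", "live"], ["study", "learn"]],
    "negatives": [["starving", "hungry"]],
}
print(expand_record(record))
# [('breathe', 'live', '8d', 1), ('study', 'learn', '8d', 1), ('starving', 'hungry', '8d', 0)]
```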
### Data Splits
| name |train|validation|
|---------|----:|---------:|
|semeval2012_relational_similarity_v2| 89 | 89|
### Number of Positive/Negative Word-pairs in each Split
| relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
|:----------------|-------------------:|-------------------:|------------------------:|------------------------:|
| 1 | 40 | 592 | 10 | 148 |
| 10 | 48 | 584 | 12 | 146 |
| 10a | 8 | 640 | 2 | 159 |
| 10b | 8 | 638 | 2 | 159 |
| 10c | 8 | 640 | 2 | 160 |
| 10d | 8 | 640 | 2 | 159 |
| 10e | 8 | 636 | 2 | 159 |
| 10f | 8 | 640 | 2 | 159 |
| 1a | 8 | 638 | 2 | 159 |
| 1b | 8 | 638 | 2 | 159 |
| 1c | 8 | 640 | 2 | 160 |
| 1d | 8 | 638 | 2 | 159 |
| 1e | 8 | 636 | 2 | 158 |
| 2 | 80 | 552 | 20 | 138 |
| 2a | 8 | 640 | 2 | 159 |
| 2b | 8 | 637 | 2 | 159 |
| 2c | 8 | 639 | 2 | 159 |
| 2d | 8 | 639 | 2 | 159 |
| 2e | 8 | 640 | 2 | 159 |
| 2f | 8 | 642 | 2 | 160 |
| 2g | 8 | 637 | 2 | 159 |
| 2h | 8 | 640 | 2 | 159 |
| 2i | 8 | 640 | 2 | 160 |
| 2j | 8 | 641 | 2 | 160 |
| 3 | 64 | 568 | 16 | 142 |
| 3a | 8 | 640 | 2 | 159 |
| 3b | 8 | 642 | 2 | 160 |
| 3c | 8 | 639 | 2 | 159 |
| 3d | 8 | 639 | 2 | 159 |
| 3e | 8 | 642 | 2 | 160 |
| 3f | 8 | 643 | 2 | 160 |
| 3g | 8 | 641 | 2 | 160 |
| 3h | 8 | 641 | 2 | 160 |
| 4 | 64 | 568 | 16 | 142 |
| 4a | 8 | 642 | 2 | 160 |
| 4b | 8 | 638 | 2 | 159 |
| 4c | 8 | 640 | 2 | 160 |
| 4d | 8 | 637 | 2 | 159 |
| 4e | 8 | 642 | 2 | 160 |
| 4f | 8 | 642 | 2 | 160 |
| 4g | 8 | 639 | 2 | 159 |
| 4h | 8 | 641 | 2 | 160 |
| 5 | 72 | 560 | 18 | 140 |
| 5a | 8 | 639 | 2 | 159 |
| 5b | 8 | 641 | 2 | 160 |
| 5c | 8 | 640 | 2 | 159 |
| 5d | 8 | 638 | 2 | 159 |
| 5e | 8 | 641 | 2 | 160 |
| 5f | 8 | 641 | 2 | 160 |
| 5g | 8 | 642 | 2 | 160 |
| 5h | 8 | 640 | 2 | 160 |
| 5i | 8 | 640 | 2 | 160 |
| 6 | 64 | 568 | 16 | 142 |
| 6a | 8 | 639 | 2 | 159 |
| 6b | 8 | 641 | 2 | 160 |
| 6c | 8 | 641 | 2 | 160 |
| 6d | 8 | 644 | 2 | 160 |
| 6e | 8 | 641 | 2 | 160 |
| 6f | 8 | 640 | 2 | 159 |
| 6g | 8 | 639 | 2 | 159 |
| 6h | 8 | 640 | 2 | 159 |
| 7 | 64 | 568 | 16 | 142 |
| 7a | 8 | 640 | 2 | 160 |
| 7b | 8 | 637 | 2 | 159 |
| 7c | 8 | 638 | 2 | 159 |
| 7d | 8 | 640 | 2 | 160 |
| 7e | 8 | 638 | 2 | 159 |
| 7f | 8 | 637 | 2 | 159 |
| 7g | 8 | 636 | 2 | 158 |
| 7h | 8 | 636 | 2 | 159 |
| 8 | 64 | 568 | 16 | 142 |
| 8a | 8 | 638 | 2 | 159 |
| 8b | 8 | 641 | 2 | 160 |
| 8c | 8 | 637 | 2 | 159 |
| 8d | 8 | 637 | 2 | 159 |
| 8e | 8 | 637 | 2 | 159 |
| 8f | 8 | 638 | 2 | 159 |
| 8g | 8 | 635 | 2 | 158 |
| 8h | 8 | 639 | 2 | 159 |
| 9 | 72 | 560 | 18 | 140 |
| 9a | 8 | 636 | 2 | 159 |
| 9b | 8 | 640 | 2 | 159 |
| 9c | 8 | 632 | 2 | 158 |
| 9d | 8 | 643 | 2 | 160 |
| 9e | 8 | 644 | 2 | 160 |
| 9f | 8 | 640 | 2 | 159 |
| 9g | 8 | 637 | 2 | 159 |
| 9h | 8 | 640 | 2 | 159 |
| 9i | 8 | 640 | 2 | 159 |
| SUM | 1264 | 56198 | 316 | 14009 |
### Citation Information
```
@inproceedings{jurgens-etal-2012-semeval,
title = "{S}em{E}val-2012 Task 2: Measuring Degrees of Relational Similarity",
author = "Jurgens, David and
Mohammad, Saif and
Turney, Peter and
Holyoak, Keith",
booktitle = "*{SEM} 2012: The First Joint Conference on Lexical and Computational Semantics {--} Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation ({S}em{E}val 2012)",
month = "7-8 " # jun,
year = "2012",
address = "Montr{\'e}al, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/S12-1047",
pages = "356--364",
}
``` |
ChristophSchuhmann/improved_aesthetics_4.5plus | 2022-08-25T05:21:30.000Z | [
"license:apache-2.0",
"region:us"
] | ChristophSchuhmann | null | null | null | 9 | 6 | ---
license: apache-2.0
---
|
inarikami/wikipedia-japanese | 2022-09-11T02:42:50.000Z | [
"region:us"
] | inarikami | null | null | null | 4 | 6 | # Japanese Wikipedia Dataset
This dataset is a comprehensive pull of all Japanese Wikipedia article data as of 2022-08-08.
*Note:* Right now it is uploaded as a single cleaned gzip file (for faster usage); I'll update this in the future to include a Hugging Face `datasets`-compatible loading class and better support for Japanese than the existing wikipedia repo.
### Example use case:
```shell
gunzip jawiki20220808.json.gz
```
```python
import pandas as pd
from datasets import load_dataset
df = pd.read_json(path_or_buf="jawiki20220808.json", lines=True)
# *your preprocessing here*
df.to_csv("jawiki.csv", index=False)
dataset = load_dataset("csv", data_files="jawiki.csv")
dataset['train'][0]
```
The Wikipedia articles were processed from their compressed format into a 7 GB JSONL file, with filtering to remove extraneous characters, using the repo: https://github.com/singletongue/WikiCleaner.
Sample Text:
```json
{"title": "東洋大学朝霞キャンパス", "pageid": 910815, "wikidata_id": "Q11527630", "categories": ["出典を必要とする記述のある記事/2018年5月", "ウィキデータにある座標", "東洋大学のキャンパス", "朝霞市の学校", "地図があるページ"], "redirects": ["朝霞キャンパス"], "n_inlinks": 47, "sections": [[[], "東洋大学朝霞キャンパス(とうようだいがくあさかきゃんぱす)は、/(埼玉県/埼玉県)//(朝霞市/朝霞市)/にある/(東洋大学/東洋大学)/のキャンパスである。"], [["概要"], "所在地は/(埼玉県/埼玉県)//(朝霞市/朝霞市)/岡48-1。元々は文系5学部(文学部、経済学部、経営学部、法学部、社会学部)の1、2年次用として開発されたキャンパスである。2005年に文系5学部の白山移転が実施されたため、/(ライフデザイン学部/ライフデザイン学部)/のキャンパスとして使用されていた。また、1号館(岡2-11-10)に設定されていた所在地表記を2006年4月1日より東洋大学朝霞事務部の入る朝霞図書館研究管理棟(岡48-1)へ変更した。なお、文系5学部移転後は1号館および3号館は使用されていない(詳細は後述)。\n\n2020年までの使用学部はライフデザイン学部、大学院は大学院福祉社会デザイン研究科ヒューマンデザイン専攻が設置。ライフデザイン学部(大学院を含む)は2021年4月に朝霞キャンパスから/(東洋大学赤羽台キャンパス/東洋大学赤羽台キャンパス)/へ移転し、2024年に/(東洋大学板倉キャンパス/板倉キャンパス)/で設置されている生命科学部、食環境科学部と、/(東洋大学川越キャンパス/川越キャンパス)/で設置されている理工学部 生体医工学科が朝霞キャンパスに移転する予定になっている。"], [["歴史"], "/(文学部/文学部)/のみの/(単科大学と総合大学/単科大学)/から複数の分野を網羅する総合大学へ脱皮するにあたって、キャンパスの面積不足は大きな課題であった。当初、工学部も含めて、全てを白山キャンパスに設置する予定でいたが、面積の問題からかなわず、川越市長/(伊藤泰吉/伊藤泰吉)/の熱心な働きかけによって工学部を川越市に設置することとなった。その後、文系学部の増強に伴って文系各学部の教養課程を分離することが必要となった。当初は川越キャンパスをそれにあてる予定であったが/(学生運動/学生運動)/の影響により、断念することとなる。しかし、1966年の経営学部設置認可は教養課程の分離を前提としてなされていたことから早急に対応する必要があり、朝霞市郊外の/(黒目川/黒目川)/河畔の広大な土地を地権者から譲渡されることとなり、朝霞キャンパスの整備計画がスタートした。\n\n東洋大学では、当初は2号館(現講義棟)の校地のみを使用してキャンパスを整備する予定でいた。しかし、朝霞キャンパス建設予定地は/(市街化調整区域/市街化調整区域)/となっており、区域変更ないしは公的建築物としての特例認可の手続きが必要であった。東洋大学では速やかに建築許可がなされると考えていたが、河川整備のなされていない/(黒目川/黒目川)/河畔であったことから国の許諾がなかなか降りず、進出計画は難航してしまった。しかし、前述の通り、経営学部の設置認可特認の手前、早急な新キャンパス開設が求められ、急遽市街化地域に土地を入手して1号館を建設。1977年から文系5学部の教養課程(ただし文学部は一部講義のみ)を朝霞キャンパスで開講できる運びとなった。その後に特例認可がなされ、2号館を建設。キャンパスとして本格的に稼動することとなる。\n\n朝霞キャンパス設置当時は郊外型キャンパスの人気が高く、環境のよい朝霞キャンパスは東洋大学の志願者増に貢献した。ところが/(バブル景気/バブル崩壊)/後、受験生の/(都心回帰/都心回帰)/傾向が強まり、さらに/(大学全入時代/大学全入時代)/を迎えると朝霞キャンパスと白山キャンパスに分断されていることがデメリットとなってしまった。そこで東洋大学では白山キャンパスの再開発事業を実施、近隣の土地を取得して2005年から再度文系5学部を白山キャンパスへ集中させた。\n\n東洋大学の当初計画では、市街化調整区域に存在していてこれ以上の拡張が望めない朝霞キャンパスは、現在設置されている体育館などの体育関連施設および学生サークル用施設を残し、他の施設は解体、教育・研究施設としての機能は廃止する予定でいた。学生数の減少による/(朝霞台駅/朝霞台駅)/(/(北朝霞駅/北朝霞駅)/)周辺の商業的なデメ
リットを憂慮した朝霞市は、キャンパス機能の維持に対して陳情活動が数回実施された。朝霞市による学生利用に適した道路整備など、これまで構築されてきた朝霞市との良好な関係を考慮した東洋大学では新学部を設置することで教育・研究施設としての機能を維持することを決定、2005年の文系5学部白山集中化と同時に朝霞キャンパスにライフデザイン学部を設置した。\n\nしかし、/(少子化/少子化)/や/(2018年問題/2018年問題)/の影響は避けられず、2017年9月に/(東洋大学赤羽台キャンパス/東洋大学赤羽台キャンパス)/を拡張してライフデザイン学部(大学院を含む)を2021年を目途に移転することを発表した。\n\n2015年11月に旧3号館の敷地に/(ヤオコー/ヤオコー)/朝霞岡店が開店。\n\n2018年1月に旧4号館・旧総合体育館・旧テニスコートの敷地に朝霞台中央総合病院が/(TMGあさか医療センター/TMGあさか医療センター)/と改称のうえ新築移転し、446床の新病院となった。"], [["学部"], "なし"], [["大学院"], "なし"], [["施設"], ""], [["施設", "現存する施設"], "講義棟:旧2号館。3階建てのメイン校舎。大講義室のほか、ゼミで使用する少人数教室やLL教室が設置されている。ライフデザイン学部開設に伴い、一部の教室は実習室へ改装された。この校舎の地下にはかつてサークル部室が存在していたが、現在は使用禁止となっている。\n情報実習棟:旧5号館。情報実習用に建てられた3階建ての校舎である。コンクリート打ちっぱなしのデザインは東洋大学の卒業生の手によるもの。\n研究管理棟:東洋大学朝霞事務部の入る3階建ての建物。当初は事務部のほか、文学部・社会学部専任教員用の研究室が割り当てられていた。\n大学院・研究棟:旧研究指導棟。東洋大学専任教員の研究室と大学院の講義室がある。文系5学部が朝霞にあった時代には白山と朝霞の研究室でも全専任教員用の研究室を満たすことができず、この建物が新規に建てられた。5階建てで1階は吹きさらしの屋外広場となっている。ライフデザイン学部の全専任教員の研究室が入るほか、大学院の演習や共同研究室としても使用されている。\n図書館棟:東洋大学図書館朝霞分館の入居する3階建て。2階から入場する形式となっている。この建物の地下には食堂があり、/(TBSテレビ/TBS)/系のテレビドラマ「/(HOTEL/HOTEL)/」で社員食堂シーンを撮影する際に使用されていた。\nコミュニティセンター:公認サークルおよび体育会各部の部室が入居する4階建ての学生会館。1階には演劇サークル用に多目的ホールがあり、2階には会議室と演劇サークル用の練習室、メディアサークル用の音響室が設けられている。\n人間環境デザイン学科実験工房棟:旧研究室棟。ライフデザイン学部の新設に伴い、2005年にリフォームされた。2009年に第18回/(ロングライフビル推進協会/BELCA賞)/ベストリフォーム部門受賞。\n総合体育館:旧総合体育館に代わる体育施設として2014年に竣工した地上2階建ての建物。アリーナやトレーニングルームの他、ライフデザイン学部の実習室も設置されている。"], [["施設", "現存しない施設"], 
"旧1号館:キャンパス設置時に建設された3階建ての校舎で、真裏は住宅地である。キャンパス開設当初に建設され、最も古く駅から遠い校舎だったが、現在は取り壊され、跡地は売却のうえ民間のマンションになっている。1階の書店では新年度始めに教科書の一斉販売が行われていた。\n旧3号館:市街化調整区域で校舎の増築がなかなか認められないことから、道路を挟んだ1号館の隣に急遽取得した土地に建てられた校舎である。音響機器や衛星通信による遠隔講義に対応した2つの大講義室と大学生協および食堂が設置されていたが、現在は取り壊され、跡地は売却のうえ/(ヤオコー/ヤオコー)/朝霞岡店になっている。\n旧4号館:かつて存在したプレハブ校舎。当初は体育科目の講義や社会学部の演習で使用されていたが、その後は音楽系サークルの練習場として使用された。5号館の設置に伴い、/(建築基準法/建築基準法)/の問題から取り壊され、跡地は芝生として整備されていた。ここの/(公衆電話/公衆電話)/は学内で一番空いているとされ、携帯電話普及前には重宝がられた。1号館などと同様に敷地は売却され、現在は/(TMGあさか医療センター/TMGあさか医療センター)/が建っている。\n旧総合体育館:体育系の講義と体育会の練習設備として使用される3階建ての建物。剣道場、柔道場、卓球場、レスリング場などのほか、フィットネスクラブで使用されている各種運動器具が配置されたトレーニングルームが設置されており、東洋大学の学生教職員であれば、一定の講習を受けることで自由に使用することができた。4号館跡地と一体で売却され、現在はTMGあさか医療センターが建っている。\n旧テニスコート:旧総合体育館隣の東武東上線の線路脇に存在し、体育系の講義やテニスサークルの活動に使用されていた。4号館や総合体育館同様、現在はTMGあさか医療センターが建っている。"], [["特徴"], "開設当初は文系5学部の教養課程を担当する目的であったことから体育施設が充実していた。また、語学用の少人数教室が多く配置されている。\n現在でも市街化調整区域となっているため、周辺の開発が進まない反面、キャンパスの拡張にも制約があり、再開発の計画は思うように進んでいない。\n5階建ての大学院・研究棟は東武鉄道の電車からもよく見え、朝霞市北部のランドマーク的な存在となっている。"], [["アクセス"], "/(東日本旅客鉄道/JR東日本)//(武蔵野線/武蔵野線)//(北朝霞駅/北朝霞駅)/東口および/(東武鉄道/東武)//(東武東上本線/東上線)//(朝霞台駅/朝霞台駅)/東口から徒歩10分\n朝霞台駅・北朝霞駅東口、東武東上線/(朝霞駅/朝霞駅)/東口より/(朝霞市内循環バス/朝霞市内循環バス)/わくわく号・根岸台線 朝霞市斎場停留所から徒歩1分"], [["脚注"], ""], [["外部リンク"], "東洋大学朝霞キャンパス案内図等"]]}
```
## Usage
Clone this repo and decompress the jsonl file using:
```sh
git clone https://huggingface.co/datasets/tensorcat/wikipedia-japanese && cd wikipedia-japanese
gunzip jawiki-20220808.json.gz
``` |
and111/bert_pretrain_phase1 | 2022-08-23T17:14:31.000Z | [
"region:us"
] | and111 | null | null | null | 1 | 6 | ### Dataset Summary
Input data for the **first** phase of BERT pretraining (sequence length 128). All text is tokenized with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
Data is obtained by concatenating and shuffling [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking and input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
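As an illustration of what a phase-1 instance looks like, the sketch below packs two tokenized sentences into a single length-128 sequence. This is not the reference preprocessor, just a minimal sketch; the special-token ids 101/102/0 are those of the bert-base-uncased vocabulary, and the token ids in the example are arbitrary placeholders.

```python
CLS_ID, SEP_ID, PAD_ID = 101, 102, 0  # bert-base-uncased [CLS], [SEP], [PAD]

def pack_pair(tokens_a, tokens_b, max_len=128):
    """Assemble [CLS] A [SEP] B [SEP] and pad everything to max_len."""
    input_ids = [CLS_ID] + tokens_a + [SEP_ID] + tokens_b + [SEP_ID]
    if len(input_ids) > max_len:
        raise ValueError("pair longer than max_len; truncate sentences first")
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    attention_mask = [1] * len(input_ids)
    pad = max_len - len(input_ids)
    return (input_ids + [PAD_ID] * pad,
            segment_ids + [0] * pad,
            attention_mask + [0] * pad)

ids, segments, mask = pack_pair([7592, 2088], [2204, 2851])  # toy token ids
```

Note that, matching this dataset, no masking is applied here; masking is left to the training loop.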
See the dataset for the **second** phase of pretraining: [bert_pretrain_phase2](https://huggingface.co/datasets/and111/bert_pretrain_phase2). |
Kirili4ik/yandex_jobs | 2022-09-03T17:55:00.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:multiple-choice",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ru",... | Kirili4ik | null | null | null | 4 | 6 | ---
annotations_creators:
- expert-generated
language:
- ru
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: yandex_jobs
size_categories:
- n<1K
source_datasets:
- original
tags:
- vacancies
- jobs
- ru
- yandex
task_categories:
- text-generation
- summarization
- multiple-choice
task_ids:
- language-modeling
---
# Dataset Card for Yandex_Jobs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of more than 600 IT vacancies in Russian, parsed from the Telegram channel https://t.me/ya_jobs. All the texts are fully structured, with no missing values.
### Supported Tasks and Leaderboards
`text-generation` with the 'Raw text' column.
`summarization`: producing the header from the full vacancy text.
`multiple-choice`: predicting the hashtags (choosing several from all the hashtags available in the dataset).
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.
## Dataset Structure
### Data Instances
The data is parsed from vacancies of the Russian IT company [Yandex](https://ya.ru/).
An example from the set looks as follows:
```
{'Header': 'Разработчик интерфейсов в группу разработки спецпроектов',
'Emoji': '🎳',
'Description': 'Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.\nМы ищем опытного и открытого новому фронтенд-разработчика.',
'Requirements': '• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах',
'Tasks': '• разрабатывать интерфейсы',
'Pluses': '• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL',
'Hashtags': '#фронтенд #турбо #JS',
'Link': 'https://ya.cc/t/t7E3UsmVSKs6L',
'Raw text': 'Разработчик интерфейсов в группу разработки спецпроектов🎳
Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.
Мы ищем опытного и открытого новому фронтенд-разработчика.
Мы ждем, что вы:
• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах
Что нужно делать:
• разрабатывать интерфейсы
Будет плюсом, если вы:
• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL
https://ya.cc/t/t7E3UsmVSKs6L
#фронтенд #турбо #JS'
}
```
### Data Fields
- `Header`: A string with a position title (str)
- `Emoji`: Emoji used at the end of the position title (usually associated with the position) (str)
- `Description`: Short description of the vacancy (str)
- `Requirements`: A couple of required technologies/programming languages/experience (str)
- `Tasks`: Examples of the tasks of the job position (str)
- `Pluses`: A couple of nice-to-have points for the applicant (technologies/experience/etc.) (str)
- `Hashtags`: A list of hashtags associated with the job (usually programming languages) (str)
- `Link`: A link to a job description (there may be more information, but it is not checked) (str)
- `Raw text`: Raw text with all the formatting from the channel, constructed from the other fields (str)
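For the multiple-choice use of `Hashtags`, the space-separated string can be split into a clean label list with a small helper (a sketch based on the field format shown in the example above):

```python
def parse_hashtags(hashtags: str) -> list:
    """Split a space-separated '#tag' string into bare tag names."""
    return [tag.lstrip("#") for tag in hashtags.split() if tag.startswith("#")]

labels = parse_hashtags("#фронтенд #турбо #JS")  # -> ['фронтенд', 'турбо', 'JS']
```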
### Data Splits
There are not enough examples yet to split them into train/test/validation sets, in my opinion.
## Dataset Creation
The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All unparsed examples and those missing any field were deleted (going from 1600 vacancies to only 600 without any missing fields like emojis or links).
## Considerations for Using the Data
These vacancies are from only one IT company (Yandex). This means they can be quite specific and probably cannot be generalized to vacancies in general, or even to IT vacancies broadly.
## Contributions
- **Point of Contact and Author:** Kirill Gelvan (Telegram: @kirili4ik) |
ai-forever/school_notebooks_RU | 2023-02-09T18:27:24.000Z | [
"task_categories:image-segmentation",
"task_categories:object-detection",
"source_datasets:original",
"language:ru",
"license:mit",
"optical-character-recognition",
"text-detection",
"ocr",
"region:us"
] | ai-forever | null | null | null | 6 | 6 | ---
language:
- ru
license:
- mit
source_datasets:
- original
task_categories:
- image-segmentation
- object-detection
task_ids: []
tags:
- optical-character-recognition
- text-detection
- ocr
---
# School Notebooks Dataset
The images of school notebooks with handwritten notes in Russian.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.json` should have the following dictionaries:
- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries describing the images; each dictionary must contain the fields:
- `file_name` - name of the image file.
- `id` - the image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:
- `image_id` - the index of the image on which the polygon is located.
- `category_id` - the polygon’s category index.
- `attributes` - a dict with additional annotation information. The `translation` subdict contains the text for the line.
- `segmentation` - the coordinates of the polygon: a flat list of numbers forming x, y coordinate pairs. |
daspartho/subreddit-posts | 2022-12-23T20:52:04.000Z | [
"license:apache-2.0",
"region:us"
] | daspartho | null | null | null | 1 | 6 | ---
license: apache-2.0
---
Dataset of titles of the top 1000 posts from each of the top 250 subreddits, scraped using [PRAW](https://praw.readthedocs.io/en/stable/index.html).
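A minimal sketch of how such a scrape could be organized. The PRAW client lines are illustrative placeholders (not taken from the repo's script), so they are left as comments; the title-collection helper itself is plain Python and runs on a stand-in sample:

```python
# Illustrative setup -- real credentials would be needed:
# import praw
# reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="scraper")

def collect_titles(submissions):
    """Keep unique, non-empty titles from an iterable of submissions."""
    seen, titles = set(), []
    for post in submissions:
        title = post.title.strip()
        if title and title not in seen:
            seen.add(title)
            titles.append(title)
    return titles

class _Post:  # stand-in for a PRAW submission in this sketch
    def __init__(self, title):
        self.title = title

sample = [_Post(" Hello "), _Post("Hello"), _Post("World")]
titles = collect_titles(sample)  # -> ['Hello', 'World']

# With a live client this would be, e.g.:
# titles = collect_titles(reddit.subreddit("MachineLearning").top(limit=1000))
```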
For steps to create the dataset check out the [dataset](https://github.com/daspartho/predict-subreddit/blob/main/dataset.py) script in the GitHub repo. |
hadiqa123/en_timit_asr | 2022-09-20T15:52:36.000Z | [
"region:us"
] | hadiqa123 | null | null | null | 0 | 6 | Entry not found |
gexai/inquisitiveqg | 2022-09-20T21:22:53.000Z | [
"license:unknown",
"region:us"
] | gexai | A dataset of about 20k questions that are elicited from readers as they naturally read through a document sentence by sentence. Compared to existing datasets, INQUISITIVE questions target more towards high-level (semantic and discourse) comprehension of text. Because these questions are generated while the readers are processing the information, the questions directly communicate gaps between the reader’s and writer’s knowledge about the events described in the text, and are not necessarily answered in the document itself. This type of question reflects a real-world scenario: if one has questions during reading, some of them are answered by the text later on, the rest are not, but any of them would help further the reader’s understanding at the particular point when they asked it. This resource could enable question generation models to simulate human-like curiosity and cognitive processing, which may open up a new realm of applications. | @InProceedings{ko2020inquisitive,
author = {Ko, Wei-Jen and Chen, Te-Yuan and Huang, Yiyan and Durrett, Greg and Li, Junyi Jessy},
title = {Inquisitive Question Generation for High Level Text Comprehension},
booktitle = {Proceedings of EMNLP},
year = {2020},
} | null | 0 | 6 | ---
license: unknown
---
|
zpn/pubchem_selfies | 2022-10-04T16:15:19.000Z | [
"license:openrail",
"region:us"
] | zpn | This dataset contains ~100M molecules from PubChem, with their SMILES and SELFIES representations. | @ARTICLE{Kim2016-sz,
title = "{PubChem} Substance and Compound databases",
author = "Kim, Sunghwan and Thiessen, Paul A and Bolton, Evan E and Chen,
Jie and Fu, Gang and Gindulyte, Asta and Han, Lianyi and He,
Jane and He, Siqian and Shoemaker, Benjamin A and Wang, Jiyao
and Yu, Bo and Zhang, Jian and Bryant, Stephen H",
abstract = "PubChem (https://pubchem.ncbi.nlm.nih.gov) is a public
repository for information on chemical substances and their
biological activities, launched in 2004 as a component of the
Molecular Libraries Roadmap Initiatives of the US National
Institutes of Health (NIH). For the past 11 years, PubChem has
grown to a sizable system, serving as a chemical information
resource for the scientific research community. PubChem consists
of three inter-linked databases, Substance, Compound and
BioAssay. The Substance database contains chemical information
deposited by individual data contributors to PubChem, and the
Compound database stores unique chemical structures extracted
from the Substance database. Biological activity data of
chemical substances tested in assay experiments are contained in
the BioAssay database. This paper provides an overview of the
PubChem Substance and Compound databases, including data sources
and contents, data organization, data submission using PubChem
Upload, chemical structure standardization, web-based interfaces
for textual and non-textual searches, and programmatic access.
It also gives a brief description of PubChem3D, a resource
derived from theoretical three-dimensional structures of
compounds in PubChem, as well as PubChemRDF, Resource
Description Framework (RDF)-formatted PubChem data for data
sharing, analysis and integration with information contained in
other databases.",
journal = "Nucleic Acids Res.",
publisher = "Oxford University Press (OUP)",
volume = 44,
number = "D1",
pages = "D1202--13",
month = jan,
year = 2016,
language = "en"
} | null | 3 | 6 | ---
license: openrail
---
This dataset consists of PubChem molecules downloaded from: https://ftp.ncbi.nlm.nih.gov/pubchem/Compound/CURRENT-Full/
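SELFIES strings are sequences of bracketed symbols, which makes them easy to tokenize compared to SMILES. A minimal sketch using only the standard library (the sample string is illustrative):

```python
import re

def selfies_tokens(selfies: str) -> list:
    """Split a SELFIES string into its bracketed symbols."""
    return re.findall(r"\[[^\]]*\]", selfies)

tokens = selfies_tokens("[C][=C][C][=C][C][=C][Ring1][=Branch1]")
# -> ['[C]', '[=C]', '[C]', '[=C]', '[C]', '[=C]', '[Ring1]', '[=Branch1]']
```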
There are in total ~85M compounds for training, with an additional ~10M held out for validation and testing. |
truongpdd/vietnamese_poetry_story | 2022-09-23T11:32:45.000Z | [
"region:us"
] | truongpdd | null | null | null | 0 | 6 | Entry not found |
arbml/Arabizi_Transliteration | 2022-11-03T13:10:45.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 6 | Entry not found |
arbml/quran_uthmani | 2022-11-03T15:11:24.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 6 | Entry not found |
ywchoi/pmc_0_cleaned | 2022-10-07T17:13:03.000Z | [
"region:us"
] | ywchoi | null | null | null | 0 | 6 | Entry not found |
dennlinger/wiki-paragraphs | 2022-10-13T22:12:37.000Z | [
"task_categories:text-classification",
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-scoring",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
... | dennlinger | null | null | null | 0 | 6 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wiki-paragraphs
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- wikipedia
- self-similarity
task_categories:
- text-classification
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for `wiki-paragraphs`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/dennlinger/TopicalChange
- **Paper:** https://arxiv.org/abs/2012.03619
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Dennis Aumiller](aumiller@informatik.uni-heidelberg.de)
### Dataset Summary
The wiki-paragraphs dataset is constructed by automatically sampling two paragraphs from a Wikipedia article. If they come from the same section, they are considered a "semantic match"; otherwise they are considered "dissimilar". Dissimilar paragraphs can in theory also be sampled from other documents, but this did not show any improvement in the particular evaluation of the linked work.
The alignment is in no way meant as an accurate depiction of similarity, but it allows large numbers of samples to be mined quickly.
### Supported Tasks and Leaderboards
The dataset can be used for "same-section classification", which is a binary classification task (either two sentences/paragraphs belong to the same section or not).
This can be combined with document-level coherency measures, where we can check how many misclassifications appear within a single document.
Please refer to [our paper](https://arxiv.org/abs/2012.03619) for more details.
### Languages
The data was extracted from English Wikipedia, therefore predominantly in English.
## Dataset Structure
### Data Instances
A single instance contains three attributes:
```
{
"sentence1": "<Sentence from the first paragraph>",
"sentence2": "<Sentence from the second paragraph>",
"label": 0/1 # 1 indicates two belong to the same section
}
```
### Data Fields
- sentence1: String containing the first paragraph
- sentence2: String containing the second paragraph
- label: Integer, either 0 or 1. Indicates whether two paragraphs belong to the same section (1) or come from different sections (0)
### Data Splits
We provide train, validation and test splits, which were split as 80/10/10 from a randomly shuffled original data source.
In total, we provide 25,375,583 training pairs, as well as 3,163,685 validation and test instances, respectively.
## Dataset Creation
### Curation Rationale
The original idea was applied to self-segmentation of Terms of Service documents. Given that these are of domain-specific nature, we wanted to provide a more generally applicable model trained on Wikipedia data.
It is meant as a cheap-to-acquire pre-training strategy for large-scale experimentation with semantic similarity for long texts (paragraph-level).
Based on our experiments, it is not necessarily sufficient by itself to replace traditional hand-labeled semantic similarity datasets.
### Source Data
#### Initial Data Collection and Normalization
The data was collected based on the articles considered in the Wiki-727k dataset by Koshorek et al. The dump of their dataset can be found through the [respective Github repository](https://github.com/koomri/text-segmentation). Note that we did *not* use the pre-processed data, but rather only information on the considered articles, which were re-acquired from Wikipedia at a more recent state.
This is due to the fact that paragraph information was not retained by the original Wiki-727k authors.
We did not verify the particular focus of considered pages.
#### Who are the source language producers?
We do not have any further information on the contributors; these are volunteers contributing to en.wikipedia.org.
### Annotations
#### Annotation process
No manual annotation was added to the dataset.
We automatically sampled two paragraphs from within the same article; if they belong to the same section, the pair was assigned a label indicating "similarity" (1), otherwise a label indicating that they do not belong to the same section (0).
We sample three positive and three negative samples per section, per article.
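The sampling scheme described above can be sketched as follows. This is illustrative only (not the original script); an article is modeled as a mapping from section title to its list of paragraphs:

```python
import random

def sample_pairs(article, per_section=3, seed=0):
    """Build labeled paragraph pairs: label 1 = same section, 0 = different sections."""
    rng = random.Random(seed)
    pairs = []
    for title, paras in article.items():
        if len(paras) < 2:
            continue  # need at least two paragraphs for a positive pair
        others = [p for t, s in article.items() if t != title for p in s]
        for _ in range(per_section):
            a, b = rng.sample(paras, 2)
            pairs.append({"sentence1": a, "sentence2": b, "label": 1})
            if others:
                pairs.append({"sentence1": a, "sentence2": rng.choice(others), "label": 0})
    return pairs

article = {"Intro": ["i1", "i2"], "History": ["h1", "h2"]}
pairs = sample_pairs(article)  # 12 labeled pairs for this toy article
```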
#### Who are the annotators?
No annotators were involved in the process.
### Personal and Sensitive Information
We did not modify the original Wikipedia text in any way. Given that personal information, such as dates of birth (e.g., for a person of interest) may be on Wikipedia, this information is also considered in our dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of the dataset is to serve as a *pre-training addition* for semantic similarity learning.
Systems building on this dataset should consider additional, manually annotated data, before using a system in production.
### Discussion of Biases
To our knowledge, some works indicate that men have a several times larger chance of having a Wikipedia page created (especially in historical contexts). Therefore, a slight bias towards their over-representation might remain in this dataset.
### Other Known Limitations
As previously stated, the automatically extracted semantic similarity is not perfect; it should be treated as such.
## Additional Information
### Dataset Curators
The dataset was originally developed as a practical project by Lucienne-Sophie Marm� under the supervision of Dennis Aumiller.
Contributions to the original sampling strategy were made by Satya Almasian and Michael Gertz.
### Licensing Information
Wikipedia data is available under the CC-BY-SA 3.0 license.
### Citation Information
```
@inproceedings{DBLP:conf/icail/AumillerAL021,
author = {Dennis Aumiller and
Satya Almasian and
Sebastian Lackner and
Michael Gertz},
editor = {Juliano Maranh{\~{a}}o and
Adam Zachary Wyner},
title = {Structural text segmentation of legal documents},
booktitle = {{ICAIL} '21: Eighteenth International Conference for Artificial Intelligence
and Law, S{\~{a}}o Paulo Brazil, June 21 - 25, 2021},
pages = {2--11},
publisher = {{ACM}},
year = {2021},
url = {https://doi.org/10.1145/3462757.3466085},
doi = {10.1145/3462757.3466085}
}
``` |
awacke1/SNOMED-CT-Code-Value-Semantic-Set.csv | 2022-10-29T12:42:02.000Z | [
"license:mit",
"region:us"
] | awacke1 | null | null | null | 3 | 6 | ---
license: mit
---
SNOMED-CT-Code-Value-Semantic-Set.csv |
awacke1/eCQM-Code-Value-Semantic-Set.csv | 2022-10-29T12:40:54.000Z | [
"license:mit",
"region:us"
] | awacke1 | null | null | null | 1 | 6 | ---
license: mit
---
eCQM-Code-Value-Semantic-Set.csv |
awacke1/LOINC-CodeSet-Value-Description.csv | 2022-10-29T12:43:25.000Z | [
"license:mit",
"region:us"
] | awacke1 | null | null | null | 1 | 6 | ---
license: mit
---
LOINC-CodeSet-Value-Description.csv |
nitrosocke/arcane-diffusion-dataset | 2022-10-18T20:58:23.000Z | [
"license:creativeml-openrail-m",
"region:us"
] | nitrosocke | null | null | null | 11 | 6 | ---
license: creativeml-openrail-m
---
# Arcane Diffusion Dataset
Dataset containing the 75 images used to train the [Arcane Diffusion](https://huggingface.co/nitrosocke/Arcane-Diffusion) model.
Settings for training:
```
class prompt: illustration style
instance prompt: illustration arcane style
learning rate: 5e-6
lr scheduler: constant
num class images: 1000
max train steps: 5000
``` |
arize-ai/beer_reviews_label_drift_neg | 2022-10-19T13:20:26.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
composed entirely of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
TheoTsio/Health_Misinfo | 2023-08-28T21:51:26.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"health_misinformation, credibility",
"region:us"
] | TheoTsio | null | null | null | 0 | 6 | ---
task_categories:
- text-classification
language:
- en
tags:
- health_misinformation
- credibility
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Health Misinfo dataset is an English document dataset containing just over 6k unique articles related to health issues collected from the web. It was created in an effort to detect misinformation in health documents, and was built from the relevance judgments of the TREC Health Misinformation track.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
arize-ai/beer_reviews_label_drift_neutral | 2022-10-19T13:19:17.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | arize-ai | This dataset was crafted to be used in our tutorial [Link to the tutorial when
ready]. It consists of product reviews from an e-commerce store. The reviews
are labeled on a scale from 1 to 5 (stars). The training & validation sets are
composed entirely of reviews written in English. However, the production set has
some reviews written in Spanish. At Arize, we work to surface this issue and
help you solve it. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# }
# | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: sentiment-classification-reviews-with-drift
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [language](#language)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### language
The text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. |
toloka/WSDMCup2023 | 2023-09-29T08:39:52.000Z | [
"task_categories:visual-question-answering",
"task_ids:visual-question-answering",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"toloka",
"arxiv:2309.16511",
"region:us"
] | toloka | null | null | null | 1 | 6 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: WSDMCup2023
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- toloka
task_categories:
- visual-question-answering
task_ids:
- visual-question-answering
dataset_info:
features:
- name: image
dtype: string
- name: width
dtype: int64
- name: height
dtype: int64
- name: left
dtype: int64
- name: top
dtype: int64
- name: right
dtype: int64
- name: bottom
dtype: int64
- name: question
dtype: string
splits:
- name: train
num_examples: 38990
- name: train_sample
num_examples: 1000
- name: test_public
num_examples: 1705
- name: test_private
num_examples: 4504
config_name: wsdmcup2023
---
# Dataset Card for WSDMCup2023
## Dataset Description
- **Homepage:** [Toloka Visual Question Answering Challenge](https://toloka.ai/challenges/wsdm2023)
- **Repository:** [WSDM Cup 2023 Starter Pack](https://github.com/Toloka/WSDMCup2023)
- **Paper:** <https://arxiv.org/abs/2309.16511>
- **Leaderboard:** [CodaLab Competition Leaderboard](https://codalab.lisn.upsaclay.fr/competitions/7434#results)
- **Point of Contact:** research@toloka.ai
| Question | Image and Answer |
| --- | --- |
| What do you use to hit the ball? | <img src="https://tlkfrontprod.azureedge.net/portal-production/static/uploaded/images/KUsGAc_eqdMcNxkBXzzl/KUsGAc_eqdMcNxkBXzzl_webp_1280_x2.webp" width="228" alt="What do you use to hit the ball?"> |
| What do people use for cutting? | <img src="https://tlkfrontprod.azureedge.net/portal-production/static/uploaded/images/brXEVYckNLfQKcfNu4DF/brXEVYckNLfQKcfNu4DF_webp_1280_x2.webp" width="228" alt="What do people use for cutting?"> |
| What do we use to support the immune system and get vitamin C? | <img src="https://tlkfrontprod.azureedge.net/portal-production/static/uploaded/images/HQ0A-ZvZCGCmYfTs83K7/HQ0A-ZvZCGCmYfTs83K7_webp_1280_x2.webp" width="228" alt="What do we use to support the immune system and get vitamin C?"> |
### Dataset Summary
The WSDMCup2023 Dataset consists of images associated with textual questions.
One entry (instance) in our dataset is a question-image pair labeled with the ground truth coordinates of a bounding box containing
the visual answer to the given question. The images were obtained from a CC BY-licensed subset of the Microsoft Common Objects in
Context dataset, [MS COCO](https://cocodataset.org/). All data labeling was performed on the [Toloka crowdsourcing platform](https://toloka.ai/).
Our dataset has 45,199 instances split among three subsets: train (38,990 instances), public test (1,705 instances),
and private test (4,504 instances). The entire train dataset has been available to everyone since the start of the challenge.
The public test dataset was available from the start of the evaluation phase of the competition, but without any ground-truth labels.
After the end of the competition, both the public and private test sets were released.
## Dataset Citation
Please cite the challenge results or dataset description as follows.
- Ustalov D., Pavlichenko N., Koshelev S., Likhobaba D., and Smirnova A. [Toloka Visual Question Answering Benchmark](https://arxiv.org/abs/2309.16511). 2023. arXiv: [2309.16511 [cs.CV]](https://arxiv.org/abs/2309.16511).
```bibtex
@inproceedings{TolokaWSDMCup2023,
author = {Ustalov, Dmitry and Pavlichenko, Nikita and Koshelev, Sergey and Likhobaba, Daniil and Smirnova, Alisa},
title = {{Toloka Visual Question Answering Benchmark}},
year = {2023},
eprint = {2309.16511},
eprinttype = {arxiv},
eprintclass = {cs.CV},
language = {english},
}
```
### Supported Tasks and Leaderboards
Grounding Visual Question Answering
### Language
English
## Dataset Structure
### Data Instances
A data instance contains a URL to the picture, the image size in pixels (width and height), the ground-truth bounding box (its top-left and bottom-right points), and the question related to the picture.
```
{'image': 'https://toloka-cdn.azureedge.net/wsdmcup2023/000000000013.jpg',
 'width': 640,
 'height': 427,
 'left': 129,
 'top': 192,
 'right': 155,
 'bottom': 212,
 'question': 'What does it use to breath?'}
```
### Data Fields
* image: contains URL to the image
* width: value in pixels of image width
* height: value in pixels of image height
* left: the x coordinate in pixels of the top-left corner of the bounding box
* top: the y coordinate in pixels of the top-left corner of the bounding box
* right: the x coordinate in pixels of the bottom-right corner of the bounding box
* bottom: the y coordinate in pixels of the bottom-right corner of the bounding box
* question: a question related to the picture
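Since each instance stores its ground-truth box as absolute pixel coordinates (`left`, `top`, `right`, `bottom`), a predicted box can be compared against it with a standard intersection-over-union (IoU) computation. A minimal sketch (the helper below is ours, for illustration; it is not the challenge's official evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over union of two (left, top, right, bottom) boxes."""
    la, ta, ra, ba = box_a
    lb, tb, rb, bb = box_b
    # Width/height of the overlap rectangle; zero if the boxes are disjoint.
    inter_w = max(0, min(ra, rb) - max(la, lb))
    inter_h = max(0, min(ba, bb) - max(ta, tb))
    inter = inter_w * inter_h
    area_a = (ra - la) * (ba - ta)
    area_b = (rb - lb) * (bb - tb)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Ground-truth box from the data-instance example above.
gt = (129, 192, 155, 212)
print(iou(gt, gt))              # identical boxes -> 1.0
print(iou(gt, (0, 0, 10, 10)))  # disjoint boxes -> 0.0
```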
### Data Splits
There are four splits in the data: train, train_sample, test_public, and test_private. The 'train' split contains the full pool of data for model training.
The 'train_sample' split contains a subset of the 'train' split. The 'test_public' split contains public data for testing the model.
The 'test_private' split contains private data for the final model test.
### Source Data
The images were obtained from a CC BY-licensed subset of the Microsoft Common Objects in
Context dataset, [MS COCO](https://cocodataset.org/).
### Annotations
All data labeling was performed on the [Toloka crowdsourcing platform](https://toloka.ai/).
Only annotators who self-reported knowledge of English had access to the annotation task.
### Citation Information
* Competition: https://toloka.ai/challenges/wsdm2023
* CodaLab: https://codalab.lisn.upsaclay.fr/competitions/7434
* Dataset: https://doi.org/10.5281/zenodo.7057740 |
arbml/PAAD | 2022-10-28T12:54:12.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 6 | Entry not found |
arbml/AFND | 2022-10-31T21:21:41.000Z | [
"region:us"
] | arbml | null | null | null | 0 | 6 | Entry not found |
shunk031/cocostuff | 2022-12-09T04:29:27.000Z | [
"language:en",
"license:cc-by-4.0",
"computer-vision",
"object-detection",
"ms-coco",
"arxiv:1612.03716",
"region:us"
] | shunk031 | COCO-Stuff augments all 164K images of the popular COCO dataset with pixel-level stuff annotations. These annotations can be used for scene understanding tasks like semantic segmentation, object detection and image captioning. | @INPROCEEDINGS{caesar2018cvpr,
title={COCO-Stuff: Thing and stuff classes in context},
author={Caesar, Holger and Uijlings, Jasper and Ferrari, Vittorio},
booktitle={Computer vision and pattern recognition (CVPR), 2018 IEEE conference on},
organization={IEEE},
year={2018}
} | null | 0 | 6 | ---
language:
- en
license: cc-by-4.0
tags:
- computer-vision
- object-detection
- ms-coco
datasets:
- stuff-thing
- stuff-only
metrics:
- accuracy
- iou
---
# Dataset Card for COCO-Stuff
[](https://github.com/shunk031/huggingface-datasets_cocostuff/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://github.com/nightrome/cocostuff
- Repository: https://github.com/nightrome/cocostuff
- Paper (preprint): https://arxiv.org/abs/1612.03716
- Paper (CVPR2018): https://openaccess.thecvf.com/content_cvpr_2018/html/Caesar_COCO-Stuff_Thing_and_CVPR_2018_paper.html
### Dataset Summary
COCO-Stuff is the largest existing dataset with dense stuff and thing annotations.
From the paper:
> Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All of annotations use English as primary language.
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to pass the configuration name:
```python
from datasets import load_dataset
load_dataset("shunk031/cocostuff", "stuff-thing")
```
#### stuff-thing
An example looks as follows.
```json
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>,
'image_filename': '000000000009.jpg',
'image_id': '9',
'width': 640,
'height': 480,
'objects': [
{
'object_id': '121',
'x': 0,
'y': 11,
'w': 640,
'h': 469,
'name': 'food-other'
},
{
'object_id': '143',
'x': 0,
'y': 0,
'w': 640,
'h': 480,
'name': 'plastic'
},
{
'object_id': '165',
'x': 0,
'y': 0,
'w': 319,
'h': 118,
'name': 'table'
},
{
'object_id': '183',
'x': 0,
'y': 2,
'w': 631,
'h': 472,
'name': 'unknown-183'
}
],
'stuff_map': <PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FCA0222D880>,
}
```
#### stuff-only
An example looks as follows.
```json
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>,
'image_filename': '000000000009.jpg',
'image_id': '9',
'width': 640,
'height': 480,
'objects': [
{
'object_id': '121',
'x': 0,
'y': 11,
'w': 640,
'h': 469,
'name': 'food-other'
},
{
'object_id': '143',
'x': 0,
'y': 0,
'w': 640,
'h': 480,
'name': 'plastic'
},
{
'object_id': '165',
'x': 0,
'y': 0,
'w': 319,
'h': 118,
'name': 'table'
},
{
'object_id': '183',
'x': 0,
'y': 2,
'w': 631,
'h': 472,
'name': 'unknown-183'
}
]
}
```
### Data Fields
#### stuff-thing
- `image`: A `PIL.Image.Image` object containing the image.
- `image_id`: Unique numeric ID of the image.
- `image_filename`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `stuff_map`: A `PIL.Image.Image` object containing the Stuff + thing PNG-style annotations
- `objects`: Holds a list of `Object` data classes:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `name`: object name
#### stuff-only
- `image`: A `PIL.Image.Image` object containing the image.
- `image_id`: Unique numeric ID of the image.
- `image_filename`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `objects`: Holds a list of `Object` data classes:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `name`: object name
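Note that these boxes use the COCO-style `(x, y, w, h)` convention (top-left corner plus width and height), while many detection utilities expect corner form `(x1, y1, x2, y2)`. A small illustrative helper for the conversion (the function name is ours, not part of the dataset):

```python
def xywh_to_xyxy(obj):
    """Convert an object's (x, y, w, h) box to (x1, y1, x2, y2) corner form."""
    return (obj["x"], obj["y"], obj["x"] + obj["w"], obj["y"] + obj["h"])

# The 'table' object from the example instance above.
table = {"object_id": "165", "x": 0, "y": 0, "w": 319, "h": 118, "name": "table"}
print(xywh_to_xyxy(table))  # (0, 0, 319, 118)
```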
### Data Splits
| name | train | validation |
|-------------|--------:|-----------:|
| stuff-thing | 118,280 | 5,000 |
| stuff-only | 118,280 | 5,000 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
From the paper:
> COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:
- COCO images: [Flickr Terms of use](http://cocodataset.org/#termsofuse)
- COCO annotations: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
- COCO-Stuff annotations & code: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
### Citation Information
```bibtex
@INPROCEEDINGS{caesar2018cvpr,
title={COCO-Stuff: Thing and stuff classes in context},
author={Caesar, Holger and Uijlings, Jasper and Ferrari, Vittorio},
booktitle={Computer vision and pattern recognition (CVPR), 2018 IEEE conference on},
organization={IEEE},
year={2018}
}
```
### Contributions
Thanks to [@nightrome](https://github.com/nightrome) for publishing the COCO-Stuff dataset.
|
rungalileo/wikiner_it | 2022-11-02T19:02:52.000Z | [
"region:us"
] | rungalileo | null | null | null | 0 | 6 | Entry not found |
bigbio/medal | 2022-12-22T15:45:07.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | The Repository for Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is
a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding
pre-training in the medical domain. | @inproceedings{,
title = {MeDAL\: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining},
author = {Wen, Zhi and Lu, Xing Han and Reddy, Siva},
booktitle = {Proceedings of the 3rd Clinical Natural Language Processing Workshop},
month = {Nov},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/2020.clinicalnlp-1.15},
pages = {130--135},
} | null | 0 | 6 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: NLM_LICENSE
pretty_name: MeDAL
homepage: https://github.com/BruceWen120/medal
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for MeDAL
## Dataset Description
- **Homepage:** https://github.com/BruceWen120/medal
- **Pubmed:** True
- **Public:** True
- **Tasks:** NED
The Repository for Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is
a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding
pre-training in the medical domain.
## Citation Information
```
@inproceedings{,
title = {MeDAL\: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining},
author = {Wen, Zhi and Lu, Xing Han and Reddy, Siva},
booktitle = {Proceedings of the 3rd Clinical Natural Language Processing Workshop},
month = {Nov},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/2020.clinicalnlp-1.15},
pages = {130--135},
}
```
|
bigbio/tmvar_v3 | 2023-02-17T14:55:58.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"arxiv:2204.03637",
"region:us"
] | bigbio | This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbsnp normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER tasks and NED tasks. This dataset does NOT have splits. | @misc{https://doi.org/10.48550/arxiv.2204.03637,
title = {tmVar 3.0: an improved variant concept recognition and normalization tool},
author = {
Wei, Chih-Hsuan and Allot, Alexis and Riehle, Kevin and Milosavljevic,
Aleksandar and Lu, Zhiyong
},
year = 2022,
publisher = {arXiv},
doi = {10.48550/ARXIV.2204.03637},
url = {https://arxiv.org/abs/2204.03637},
copyright = {Creative Commons Attribution 4.0 International},
keywords = {
Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences
}
} | null | 0 | 6 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: tmVar v3
homepage: https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- NAMED_ENTITY_DISAMBIGUATION
---
# Dataset Card for tmVar v3
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbsnp normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER tasks and NED tasks. This dataset does NOT have splits.
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2204.03637,
title = {tmVar 3.0: an improved variant concept recognition and normalization tool},
author = {
Wei, Chih-Hsuan and Allot, Alexis and Riehle, Kevin and Milosavljevic,
Aleksandar and Lu, Zhiyong
},
year = 2022,
publisher = {arXiv},
doi = {10.48550/ARXIV.2204.03637},
url = {https://arxiv.org/abs/2204.03637},
copyright = {Creative Commons Attribution 4.0 International},
keywords = {
Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences
}
}
```
|
Shengtao/recipe | 2022-11-15T13:45:41.000Z | [
"license:mit",
"region:us"
] | Shengtao | null | null | null | 1 | 6 | ---
license: mit
---
|
dxiao/requirements-ner-id | 2022-11-21T18:40:22.000Z | [
"region:us"
] | dxiao | null | null | null | 0 | 6 | Entry not found |
abdalrahmanshahrour/ArabicTextSummarization | 2022-12-01T17:16:50.000Z | [
"region:us"
] | abdalrahmanshahrour | null | null | null | 0 | 6 | ## This is Arabic news data with 9 categories in csv format
original data link: https://www.kaggle.com/datasets/muhammedfathi/arabic-news-texts-corpus
Data preparation and summary link: https://www.kaggle.com/code/abdalrahmanshahrour/arabic-text-summarization |
1aurent/individuality-of-handwriting | 2023-10-01T15:15:30.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"legal",
"signatures",
"CEDAR",
"region:us"
] | 1aurent | null | null | null | 0 | 6 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- image-classification
pretty_name: Individuality Of Handwriting (CEDAR)
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': original
'1': forgeries
- name: individual
dtype: uint8
- name: figure
dtype: uint8
splits:
- name: train
num_bytes: 195780898.8
num_examples: 2640
download_size: 252337526
dataset_size: 195780898.8
tags:
- legal
- signatures
- CEDAR
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Individuality Of Handwriting (CEDAR)
https://pubmed.ncbi.nlm.nih.gov/12136998/ \
https://cedar.buffalo.edu/NIJ/projectinfo.html
## Abstract
Motivated by several rulings in United States courts concerning expert testimony in general, and handwriting testimony in particular, we undertook a study to objectively validate the hypothesis that handwriting is individual. Handwriting samples of 1,500 individuals, representative of the U.S. population with respect to gender, age, ethnic groups, etc., were obtained. Analyzing differences in handwriting was done by using computer algorithms for extracting features from scanned images of handwriting. Attributes characteristic of the handwriting were obtained, e.g., line separation, slant, character shapes, etc. These attributes, which are a subset of attributes used by forensic document examiners (FDEs), were used to quantitatively establish individuality by using machine learning approaches. Using global attributes of handwriting and very few characters in the writing, the ability to determine the writer with a high degree of confidence was established. The work is a step towards providing scientific support for admitting handwriting evidence in court. The mathematical approach and the resulting software also have the promise of aiding the FDE.
Srihari SN, Cha SH, Arora H, Lee S. Individuality of handwriting. J Forensic Sci. 2002 Jul;47(4):856-72. PMID: 12136998. |
lucadiliello/english_wikipedia | 2022-12-04T19:05:23.000Z | [
"region:us"
] | lucadiliello | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: filename
dtype: string
- name: maintext
dtype: string
- name: source_domain
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 10569005563
num_examples: 4184712
download_size: 6144953788
dataset_size: 10569005563
---
# Dataset Card for "english_wikipedia"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Prarabdha/Rick_and_Morty_Transcript | 2022-12-05T16:09:45.000Z | [
"license:mit",
"region:us"
] | Prarabdha | null | null | null | 3 | 6 | ---
license: mit
---
## Context
I got the inspiration for this dataset from the [Rick&Morty Scripts](https://www.kaggle.com/datasets/andradaolteanu/rickmorty-scripts) by [Andrada Olteanu](https://www.kaggle.com/andradaolteanu), but felt the dataset was a little small and outdated.
This dataset includes almost all episodes up to Season 5. More data will be added.
## Content
Rick and Morty Transcripts:
- index: index of the row
- speaker: the character's name
- dialogue: the dialogue of the character
## Acknowledgements
Thanks to the transcripts made available by
- [RickandMorty.fandom.com](https://rickandmorty.fandom.com/)
- [RickandMorty.newtfire.org](http://rickandmorty.newtfire.org/transcripts.html) |
djghosh/wds_imagenet1k_test | 2022-12-12T21:01:44.000Z | [
"arxiv:1409.0575",
"region:us"
] | djghosh | null | null | null | 0 | 6 | # ImageNet-1k (Test set only)
Original paper: [ImageNet Large Scale Visual Recognition Challenge](https://arxiv.org/abs/1409.0575)
Homepage: https://www.image-net.org/
Bibtex:
```
@article{ILSVRC15,
Author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
Title = {{ImageNet Large Scale Visual Recognition Challenge}},
Year = {2015},
journal = {International Journal of Computer Vision (IJCV)},
doi = {10.1007/s11263-015-0816-y},
volume={115},
number={3},
pages={211-252}
}
``` |
Pinwheel/ActsOfAgression | 2023-01-06T11:29:33.000Z | [
"task_categories:video-classification",
"size_categories:1K<n<10K",
"license:mit",
"Fight",
"No-Fight",
"region:us"
] | Pinwheel | null | null | null | 1 | 6 | ---
license: mit
task_categories:
- video-classification
tags:
- Fight
- No-Fight
size_categories:
- 1K<n<10K
--- |
Ziyang/yfcc15m | 2023-01-06T10:38:29.000Z | [
"region:us"
] | Ziyang | null | null | null | 0 | 6 | Entry not found |
dominguesm/brwac | 2023-01-08T14:28:10.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:pt",
... | dominguesm | null | null | null | 1 | 6 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: brwac
pretty_name: BrWaC
dataset_info:
features:
- name: doc_id
dtype: string
- name: title
dtype: string
- name: uri
dtype: string
- name: text
sequence:
- name: paragraphs
sequence: string
splits:
- name: train
num_bytes: 18828412956
num_examples: 3530796
download_size: 11616550261
dataset_size: 18828412956
---
# Dataset Card for BrWaC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/)
- **Point of Contact:** [Jorge A. Wagner Filho](mailto:jawfilho@inf.ufrgs.br)
### Dataset Summary
The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework,
which was made public for research purposes. The current corpus version, released in January 2017, is composed of
3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available
solely for academic research purposes, and by using it you agree not to use it for any commercial applications. No need to manually download external sources.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Portuguese
## Dataset Structure
### Data Instances
An example from the BrWaC dataset looks as follows:
```
{
"doc_id": "netg-1afc73",
"text": {
"paragraphs": [
[
"Conteúdo recente"
],
[
"ESPUMA MARROM CHAMADA \"NINGUÉM MERECE\""
],
[
"31 de Agosto de 2015, 7:07 , por paulo soavinski - | No one following this article yet."
],
[
"Visualizado 202 vezes"
],
[
"JORNAL ELETRÔNICO DA ILHA DO MEL"
],
[
"Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.",
"Na faixa de areia ela aparece disseminada e não chama muito a atenção.",
"No Buraco do Aipo, com muitas pedras, ela aparece concentrada.",
"É fácil saber que esta espuma estranha está lá, quando venta.",
"Pequenos algodões de espuma começam a flutuar no espaço, pertinho da Praia do Saquinho.",
"Quem pode ajudar na coleta deste material, envio a laboratório renomado e pagamento de análises, favor entrar em contato com o site."
]
]
},
"title": "ESPUMA MARROM CHAMADA ‟NINGUÉM MERECE‟ - paulo soavinski",
"uri": "http://blogoosfero.cc/ilhadomel/pousadasilhadomel.com.br/espuma-marrom-chamada-ninguem-merece"
}
```
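Since `text` nests paragraphs as lists of sentence strings, a small helper (a sketch for illustration, not part of any official loader) can flatten a document back into plain text:

```python
def flatten_document(doc):
    """Join a BrWaC-style nested paragraph structure into plain text.

    Each document stores its body as a list of paragraphs, where each
    paragraph is itself a list of sentence strings.
    """
    paragraphs = doc["text"]["paragraphs"]
    return "\n\n".join(" ".join(sentences) for sentences in paragraphs)

# Toy document following the structure of the example above.
doc = {
    "doc_id": "netg-1afc73",
    "text": {"paragraphs": [["Conteúdo recente"],
                            ["Visualizado 202 vezes"]]},
}
print(flatten_document(doc))
```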
### Data Fields
- `doc_id`: The document ID
- `title`: The document title
- `uri`: URI where the document was extracted from
- `text`: A list of document paragraphs (with a list of sentences in it as a list of strings)
### Data Splits
The data is only split into train set with size of 3530796 samples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wagner2018brwac,
title={The brwac corpus: A new open resource for brazilian portuguese},
author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year={2018}
}
``` |
Nicky0007/titulos_noticias_rcn_clasificadas | 2023-01-08T21:38:51.000Z | [
"task_categories:token-classification",
"size_categories:1K<n<10K",
"language:es",
"region:us"
] | Nicky0007 | null | null | null | 0 | 6 | ---
task_categories:
- token-classification
language:
- es
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset Name
titulos_noticias_rcn_clasificadas
## Dataset Description
News items were collected from the RCN news website and the headlines were classified into ['salud' 'tecnologia' 'colombia' 'economia' 'deportes']:
salud = 1805 samples,
tecnologia = 1805 samples,
colombia = 1805 samples,
economia = 1805 samples,
deportes = 1805 samples,
for a total of 9030 rows.
Website: https://www.noticiasrcn.com/
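One quick way to double-check the per-class counts described above is to tally the `label` column; a sketch over hypothetical rows (the real data would come from `datasets.load_dataset`, and the sample texts here are invented):

```python
from collections import Counter

# Hypothetical rows following the card's "text, label, url" layout.
rows = [
    {"text": "Titular sobre vacunas", "label": "salud"},
    {"text": "Nuevo celular plegable", "label": "tecnologia"},
    {"text": "Titular de economia", "label": "economia"},
    {"text": "Otro titular de salud", "label": "salud"},
]

counts = Counter(row["label"] for row in rows)
print(counts.most_common(1))  # → [('salud', 2)]
```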
- **Homepage:**
- **Repository:**
- **Point of Contact:**
### Languages
Spanish
## Dataset Structure
text, label, url |
Lemswasabi/luxembourgish-asr-rtl-lu | 2023-01-08T15:44:54.000Z | [
"language:lb",
"license:cc-by-nc-nd-4.0",
"region:us"
] | Lemswasabi | luxembourgish-asr-rtl-lu dataset is a speech corpus for the under-resourced Luxembourgish language. | null | null | 1 | 6 | ---
license: cc-by-nc-nd-4.0
language:
- lb
---
# About the Speech Corpus
The `luxembourgish-asr-rtl-lu` dataset is a speech corpus for the under-resourced Luxembourgish language. The audio-transcription pairs were collected from [RTL.lu](http://www.rtl.lu/).
We used forced alignment to segment the audio files. The transcriptions were validated with the help of language experts at the [Center for the Luxembourgish Language](https://portal.education.lu/zls).
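The forced-alignment segmentation step can be pictured as slicing the waveform at aligned timestamps; a minimal sketch (illustrative only, since the corpus itself ships already-segmented audio):

```python
def cut_segment(waveform, sampling_rate, start_s, end_s):
    """Slice a mono waveform (sequence of samples) between two alignment
    timestamps given in seconds."""
    start = int(round(start_s * sampling_rate))
    end = int(round(end_s * sampling_rate))
    return waveform[start:end]

sr = 16_000
audio = [0.0] * (5 * sr)       # 5 s of silence as a stand-in signal
clip = cut_segment(audio, sr, 1.0, 2.5)
print(len(clip) / sr)          # → 1.5
```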
# Citation
```
@misc{lb-wav2vec2,
author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt.},
keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language},
title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS},
year = {2022},
copyright = {2023 IEEE}
}
```
# Copyright notice
Copyright © 2022 RTL.lu. All rights reserved. |
bio-datasets/e3c | 2023-08-16T08:56:50.000Z | [
"region:us"
] | bio-datasets | The European Clinical Case Corpus (E3C) project aims at collecting and annotating a large corpus of clinical documents in five European languages (Spanish, Basque, English, French and Italian), which will be freely distributed. Annotations include temporal information, to allow temporal reasoning on chronologies, and information about clinical entities based on medical taxonomies, to be used for semantic reasoning. | @report{Magnini2021,
author = {Bernardo Magnini and Begoña Altuna and Alberto Lavelli and Manuela Speranza
and Roberto Zanoli and Fondazione Bruno Kessler},
keywords = {Clinical data,clinical enti-ties,corpus,multilingual,temporal information},
title = {The E3C Project:
European Clinical Case Corpus El proyecto E3C: European Clinical Case Corpus},
url = {https://uts.nlm.nih.gov/uts/umls/home},
year = {2021},
} | null | 0 | 6 | ---
dataset_info:
features:
- name: id
dtype: string
- name: document_id
dtype: int32
- name: text
dtype: string
- name: passages
list:
- name: id
dtype: string
- name: text
dtype: string
- name: offsets
list: int32
- name: entities
list:
- name: id
dtype: string
- name: type
dtype: string
- name: text
dtype: string
- name: offsets
list: int32
- name: semantic_type_id
dtype: string
- name: role
dtype: string
- name: relations
list:
- name: id
dtype: string
- name: type
dtype: string
- name: contextualAspect
dtype: string
- name: contextualModality
dtype: string
- name: degree
dtype: string
- name: docTimeRel
dtype: string
- name: eventType
dtype: string
- name: permanence
dtype: string
- name: polarity
dtype: string
- name: functionInDocument
dtype: string
- name: timex3Class
dtype: string
- name: value
dtype: string
- name: concept_1
dtype: string
- name: concept_2
dtype: string
config_name: e3c_source
splits:
- name: en.layer1
num_bytes: 1645819
num_examples: 84
- name: en.layer2
num_bytes: 881290
num_examples: 171
- name: en.layer2.validation
num_bytes: 101379
num_examples: 19
- name: en.layer3
num_bytes: 7672589
num_examples: 9779
- name: es.layer1
num_bytes: 1398186
num_examples: 81
- name: es.layer2
num_bytes: 907515
num_examples: 162
- name: es.layer2.validation
num_bytes: 103936
num_examples: 18
- name: es.layer3
num_bytes: 6656630
num_examples: 1876
- name: eu.layer1
num_bytes: 2217479
num_examples: 90
- name: eu.layer2
num_bytes: 306291
num_examples: 111
- name: eu.layer2.validation
num_bytes: 95276
num_examples: 10
- name: eu.layer3
num_bytes: 4656179
num_examples: 1232
- name: fr.layer1
num_bytes: 1474138
num_examples: 81
- name: fr.layer2
num_bytes: 905084
num_examples: 168
- name: fr.layer2.validation
num_bytes: 101701
num_examples: 18
- name: fr.layer3
num_bytes: 457927491
num_examples: 25740
- name: it.layer1
num_bytes: 1036560
num_examples: 86
- name: it.layer2
num_bytes: 888138
num_examples: 174
- name: it.layer2.validation
num_bytes: 99549
num_examples: 18
- name: it.layer3
num_bytes: 86243680
num_examples: 10213
download_size: 230213492
dataset_size: 575318910
---
# Dataset Card for E3C
## Dataset Description
- **Homepage:** https://github.com/hltfbk/E3C-Corpus
- **PubMed** False
- **Public:** True
- **Tasks:** NER,RE
The European Clinical Case Corpus (E3C) project aims at collecting and annotating a large corpus of clinical documents in five European languages (Spanish, Basque, English, French and Italian), which will be freely distributed. Annotations include temporal information, to allow temporal reasoning on chronologies, and information about clinical entities based on medical taxonomies, to be used for semantic reasoning.
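As a sketch of how the character `offsets` in the schema above can be used, assuming they index into the passage text as flat `[start, end, ...]` pairs (an assumption of this example, not official E3C tooling):

```python
def entity_surface_forms(text, entities):
    """Recover each entity's surface string from character offsets,
    treating `offsets` as a flat [start, end, start, end, ...] list."""
    spans = []
    for ent in entities:
        off = ent["offsets"]
        pieces = [text[off[i]:off[i + 1]] for i in range(0, len(off), 2)]
        spans.append((ent["type"], " ".join(pieces)))
    return spans

# Toy passage and entity; the field names follow the schema above.
text = "Patient reports chest pain since yesterday."
entities = [{"type": "CLINENTITY", "offsets": [16, 26]}]
print(entity_surface_forms(text, entities))
```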
## Citation Information
```
@report{Magnini2021,
author = {Bernardo Magnini and Begoña Altuna and Alberto Lavelli and Manuela Speranza
and Roberto Zanoli and Fondazione Bruno Kessler},
keywords = {Clinical data,clinical enti-ties,corpus,multilingual,temporal information},
title = {The E3C Project:
European Clinical Case Corpus El proyecto E3C: European Clinical Case Corpus},
url = {https://uts.nlm.nih.gov/uts/umls/home},
year = {2021},
}
```
|
kubota/defamation-japanese-twitter | 2023-02-06T18:26:10.000Z | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | kubota | null | 2 | 6 | ---
annotations_creators:
- crowdsourced
language:
- ja
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: defamation_japanese_twitter
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: id
dtype: string
- name: target
sequence: string
- name: label
sequence: string
- name: user_id_list
sequence: int32
---
# defamation_japanese_twitter
# Japanese Twitter Defamation Detection Dataset
<!-- ## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** -->
## Dataset Summary
This is a dataset for detecting defamation on social media.
Each of 5,000 Japanese tweets is annotated with the target and the type of defamation as defined below. Annotations were produced by three crowd workers per tweet. The tweets were posted between February 15, 2022 and June 30, 2022.
The original tweets are not included, so please collect them using the Twitter API.
Two items are annotated: the defamation target (`target`) and the defamation type (`label`).
- target: classification of who the text is about
- label: classification of the type of defamation directed at the target selected in `target`
Texts that do not form a coherent sentence and whose meaning cannot be determined are labeled C (0).
| target | Target | Examples |
| ---- | ---- | ---- |
| A1(1) | a group (sharing race, gender, occupation, ideology, etc.) | a group (sharing race, gender, occupation, ideology, etc.)
| A2(2) | an individual (a public figure, an acquaintance, etc.) | President ◯◯, the celebrity ◯◯, you
| A3(3) | no clearly identifiable target |
| C(0) | not a coherent sentence; meaning cannot be determined |
| label | Type of defamation | What is infringed | Examples
| ---- | ---- | ---- | ---- |
| B1(1) | threatens life, or inflicts mental or physical harm | peace of private life | • threatening remarks such as death threats<br>• I wish ◯◯ would just disappear
| B2(2) | disparages appearance, character, etc. | sense of honor | • thinks they are attractive even though they are fat<br>• has no fashion sense because they grew up in the countryside
| B3(3) | lowers the value someone objectively receives from society | right to honor | • ◯◯ was once arrested over a past incident<br>• ◯◯ is having an affair with a coworker
| B4(4) | none of B1-B3 apply; not defamatory | |
| C(0) | not a coherent sentence; meaning cannot be determined |
## Data Fields
- `id` Twitter ID
- `target`: the three annotators' answers for category A; values: C(0), A1(1), A2(2), A3(3)
- `label`: the three annotators' answers for category B; values: C(0), B1(1), B2(2), B3(3), B4(4)
- `user_id_list`: anonymized annotator IDs
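Since each row carries three annotators' answers, a simple majority vote is one way to derive a single label per tweet; a sketch (the `None` fallback on a three-way disagreement is a choice of this example, and the authors may use a different aggregation scheme):

```python
from collections import Counter

def majority_label(labels):
    """Aggregate the three annotators' answers by majority vote;
    returns None on a three-way disagreement."""
    (top, top_count), = Counter(labels).most_common(1)
    return top if top_count >= 2 else None

print(majority_label(["B2", "B2", "B4"]))  # → B2
print(majority_label(["B1", "B2", "B3"]))  # → None
```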
## Example Using Twitter API
[](https://colab.research.google.com/github/kubotaissei/defamation_japanese_twitter/blob/master/notebooks/get_dataset_example.ipynb)
```python
# sample code from https://github.com/twitterdev/Twitter-API-v2-sample-code/blob/main/Tweet-Lookup/get_tweets_with_bearer_token.py
import requests
import os
import json
from datasets import load_dataset
# To set your enviornment variables in your terminal run the following line:
# export 'BEARER_TOKEN'='<your_bearer_token>'
bearer_token = os.environ.get("BEARER_TOKEN")
def create_url(ids: list):
tweet_fields = "tweet.fields=created_at"
ids = f"ids={','.join(ids)}"
url = "https://api.twitter.com/2/tweets?{}&{}".format(ids, tweet_fields)
return url
def bearer_oauth(r):
"""
Method required by bearer token authentication.
"""
r.headers["Authorization"] = f"Bearer {bearer_token}"
r.headers["User-Agent"] = "v2TweetLookupPython"
return r
def connect_to_endpoint(url):
response = requests.request("GET", url, auth=bearer_oauth)
if response.status_code != 200:
raise Exception(
"Request returned an error: {} {}".format(
response.status_code, response.text
)
)
return response.json()
def get_text_data(examples):
url = create_url(examples["id"])
json_response = connect_to_endpoint(url)
# print(json_response["data"])
text_dict = {data["id"]: data["text"] for data in json_response["data"]}
time_dict = {data["id"]: data["created_at"] for data in json_response["data"]}
return {
"text": [text_dict.get(id) for id in examples["id"]],
"created_at": [time_dict.get(id) for id in examples["id"]],
}
dataset = load_dataset("kubota/defamation-japanese-twitter")
dataset = dataset.map(get_text_data, batched=True, batch_size=100)
dataset["train"].to_pandas().head()
```
<!-- ## Data Splits
[More Information Needed]
## Dataset Creation
## Curation Rationale
[More Information Needed]
## Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed] -->
## Contributions
Thanks to [@kubotaissei](https://github.com/kubotaissei) for adding this dataset. | ||
ivelin/rico_refexp_combined | 2023-01-20T16:46:06.000Z | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"ui refexp",
"region:us"
] | ivelin | null | null | null | 3 | 6 | ---
license: cc
task_categories:
- question-answering
language:
- en
tags:
- ui refexp
pretty_name: UI RefExp Combined
size_categories:
- 100K<n<1M
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: prompt
dtype: string
- name: target_bounding_box
struct:
- name: xmax
dtype: float64
- name: xmin
dtype: float64
- name: ymax
dtype: float64
- name: ymin
dtype: float64
splits:
- name: train
num_bytes: 42127199077.08
num_examples: 390084
- name: validation
num_bytes: 409042403.17
num_examples: 3191
- name: test
num_bytes: 456349755.528
num_examples: 3912
download_size: 27184189035
dataset_size: 42992591235.778
---
# Dataset Card for "rico_refexp_combined"
This dataset combines the crowdsourced RICO RefExp prompts from the [UIBert dataset](https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic) and the synthetically generated prompts from the [seq2act dataset](https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic). |
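If the stored `target_bounding_box` coordinates are normalized to [0, 1] (an assumption of this sketch, worth verifying against a few samples), converting them to pixel coordinates for a given screenshot size looks like:

```python
def to_pixels(bbox, width, height):
    """Convert a normalized target_bounding_box dict into integer pixel
    coordinates for a screenshot of the given size."""
    return {
        "xmin": round(bbox["xmin"] * width),
        "ymin": round(bbox["ymin"] * height),
        "xmax": round(bbox["xmax"] * width),
        "ymax": round(bbox["ymax"] * height),
    }

box = {"xmin": 0.1, "ymin": 0.2, "xmax": 0.5, "ymax": 0.4}
print(to_pixels(box, 1440, 2560))
```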
nglaura/pubmedlay-summarization | 2023-04-11T10:10:19.000Z | [
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"region:us"
] | nglaura | null | null | null | 0 | 6 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
pretty_name: PubMed-Lay
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## PubMed-Lay dataset for summarization
PubMed-Lay is an enhanced version of the PubMed summarization dataset, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 78,234 |
| Validation | 4,084 |
| Test | 4,350 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
```
|
qwedsacf/story-generation | 2023-02-02T11:00:46.000Z | [
"task_categories:text-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"story-generation",
"region:us"
] | qwedsacf | null | null | null | 6 | 6 | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-generation
task_ids: []
tags:
- story-generation
dataset_info:
features:
- name: summary
dtype: string
- name: story
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 385345341
num_examples: 427223
download_size: 213423683
dataset_size: 385345341
size_categories:
- 100K<n<1M
---
# Story generation
## Dataset Description
- **Homepage:** https://laion.ai/
### Dataset Summary
This dataset contains summaries and stories from the [RUCAIBox/Story-Generation](https://huggingface.co/datasets/RUCAIBox/Story-Generation) dataset.
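Each row pairs a `summary` with its `story`; a sketch of turning rows into (input, target) pairs for a summary-to-story model (the separator string is an arbitrary choice of this example):

```python
def to_training_pair(example, sep="\n\n###\n\n"):
    """Format one row into an (input, target) pair for a
    summary-conditioned story generation model."""
    return example["summary"] + sep, example["story"]

# Invented row for illustration.
row = {"summary": "A knight seeks a lost crown.",
       "story": "Once upon a time, a knight rode north..."}
src, tgt = to_training_pair(row)
print(repr(src))
```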
## Dataset Structure
### Data Fields
- `summary`: The summary of the story
- `story`: The story |
Cohere/miracl-ar-queries-22-12 | 2023-02-06T12:00:30.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 6 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ar-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ar-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ar-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ar-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ar-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking based loss), as well as hit@3: Is at least one relevant document in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
metaeval/syntactic-augmentation-nli | 2023-06-13T07:28:15.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"language:en",
"license:mit",
"region:us"
] | metaeval | null | null | null | 0 | 6 | ---
license: mit
task_ids:
- natural-language-inference
task_categories:
- text-classification
language:
- en
---
https://github.com/Aatlantise/syntactic-augmentation-nli/tree/master/datasets
```
@inproceedings{min-etal-2020-syntactic,
title = "Syntactic Data Augmentation Increases Robustness to Inference Heuristics",
author = "Min, Junghyun and
McCoy, R. Thomas and
Das, Dipanjan and
Pitler, Emily and
Linzen, Tal",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.212",
doi = "10.18653/v1/2020.acl-main.212",
pages = "2339--2352",
}
``` |
IlyaGusev/rulm_tokenized | 2023-01-31T19:22:56.000Z | [
"license:other",
"region:us"
] | IlyaGusev | null | null | null | 0 | 6 | ---
license: other
---
|
erkam/clevr-with-depth | 2023-02-03T02:09:24.000Z | [
"region:us"
] | erkam | null | null | null | 1 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: depth
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 115079852.0
num_examples: 1400
- name: test
num_bytes: 24726160.0
num_examples: 300
- name: val
num_bytes: 24696560.0
num_examples: 300
download_size: 164000762
dataset_size: 164502572.0
---
# Dataset Card for "clevr-with-depth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jrahn/yolochess_lichess-elite_2211 | 2023-02-08T07:19:54.000Z | [
"task_categories:text-classification",
"task_categories:reinforcement-learning",
"size_categories:10M<n<100M",
"license:cc",
"chess",
"region:us"
] | jrahn | null | null | null | 2 | 6 | ---
dataset_info:
features:
- name: fen
dtype: string
- name: move
dtype: string
- name: result
dtype: string
- name: eco
dtype: string
splits:
- name: train
num_bytes: 1794337922
num_examples: 22116598
download_size: 1044871571
dataset_size: 1794337922
task_categories:
- text-classification
- reinforcement-learning
license: cc
tags:
- chess
size_categories:
- 10M<n<100M
---
# Dataset Card for "yolochess_lichess-elite_2211"
Source: https://database.nikonoel.fr/ - filtered from https://database.lichess.org for November 2022
Features:
- fen = Chess board position in [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) format
- move = Move played by a strong human player in this position
- result = Final result of the match
- eco = [ECO](https://en.wikipedia.org/wiki/Encyclopaedia_of_Chess_Openings)-code of the Opening played
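The `fen` field follows standard FEN, so its six space-separated fields can be pulled apart with plain string operations; a sketch (for legality checks or move generation, a chess library would be the better tool):

```python
def fen_fields(fen):
    """Split a FEN string into its six standard fields."""
    board, side, castling, en_passant, halfmove, fullmove = fen.split()
    return {"board": board, "side_to_move": side, "castling": castling,
            "en_passant": en_passant, "halfmove_clock": int(halfmove),
            "fullmove_number": int(fullmove)}

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
print(fen_fields(start)["side_to_move"])  # → w
```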
Samples: 22.1 million |
Isamu136/big-animal-dataset | 2023-02-08T21:02:10.000Z | [
"region:us"
] | Isamu136 | null | null | null | 2 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 1198940745.5549998
num_examples: 62149
download_size: 0
dataset_size: 1198940745.5549998
---
# Dataset Card for "big-animal-dataset"
Hi! I combined the Animals-10 dataset, the Oxford-IIIT Pet dataset, the Stanford Dogs dataset, and the Cats vs. Dogs dataset into one large animal dataset.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
karukas/arxiv-abstract-matching | 2023-02-09T20:48:55.000Z | [
"region:us"
] | karukas | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: train
num_bytes: 7119340064
num_examples: 203037
- name: validation
num_bytes: 216202656
num_examples: 6436
- name: test
num_bytes: 216585242
num_examples: 6440
download_size: 3635681697
dataset_size: 7552127962
---
# Dataset Card for "arxiv-abstract-matching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
karukas/mediasum-summary-matching | 2023-02-11T00:05:53.000Z | [
"region:us"
] | karukas | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: train
num_bytes: 4149687650
num_examples: 443596
- name: validation
num_bytes: 92028438
num_examples: 10000
- name: test
num_bytes: 94033599
num_examples: 10000
download_size: 2438334598
dataset_size: 4335749687
---
# Dataset Card for "mediasum-summary-matching"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jonathan-roberts1/GID | 2023-03-31T15:38:31.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': arbor woodland
'1': artificial grassland
'2': dry cropland
'3': garden plot
'4': industrial land
'5': irrigated land
'6': lake
'7': natural grassland
'8': paddy field
'9': pond
'10': river
'11': rural residential
'12': shrub land
'13': traffic land
'14': urban residential
splits:
- name: train
num_bytes: 1777210275
num_examples: 30000
download_size: 1263253291
dataset_size: 1777210275
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "GID"
## Dataset Description
- **Paper** [Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
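The `label` feature stores class indices; outside of the `datasets` library's `ClassLabel` helper, a plain lookup table recovers the names (a partial sketch covering a few of the 15 classes listed in the YAML above):

```python
# Partial mapping of GID class indices to names, taken from the YAML above.
ID2LABEL = {0: "arbor woodland", 6: "lake", 10: "river", 14: "urban residential"}

def decode(label_id):
    """Return the class name for an integer label, or 'unknown'."""
    return ID2LABEL.get(label_id, "unknown")

print(decode(6))  # → lake
```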
### Licensing Information
Public domain.
## Citation Information
[Land-cover classification with high-resolution remote sensing images using transferable deep models](https://www.sciencedirect.com/science/article/pii/S0034425719303414)
```
@article{GID2020,
title = {Land-cover classification with high-resolution remote sensing images using transferable deep models},
author = {Tong, Xin-Yi and Xia, Gui-Song and Lu, Qikai and Shen, Huanfeng and Li, Shengyang and You, Shucheng and Zhang, Liangpei},
year = 2020,
journal = {Remote Sensing of Environment},
volume = 237,
pages = 111322
}
``` |
jonathan-roberts1/CLRS | 2023-03-31T15:35:22.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 6 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airport
'1': bare land
'2': beach
'3': bridge
'4': commercial
'5': desert
'6': farmland
'7': forest
'8': golf course
'9': highway
'10': industrial
'11': meadow
'12': mountain
'13': overpass
'14': park
'15': parking
'16': playground
'17': port
'18': railway
'19': railway station
'20': residential
'21': river
'22': runway
'23': stadium
'24': storage tank
splits:
- name: train
num_bytes: 2969926932
num_examples: 15000
download_size: 2327956775
dataset_size: 2969926932
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "CLRS"
## Dataset Description
- **Paper** [CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
### Licensing Information
For academic purposes.
## Citation Information
[CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification](https://www.mdpi.com/1424-8220/20/4/1226/pdf)
```
@article{s20041226,
title = {CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification},
author = {Li, Haifeng and Jiang, Hao and Gu, Xin and Peng, Jian and Li, Wenbo and Hong, Liang and Tao, Chao},
year = 2020,
journal = {Sensors},
volume = 20,
number = 4,
doi = {10.3390/s20041226},
issn = {1424-8220},
url = {https://www.mdpi.com/1424-8220/20/4/1226},
article-number = 1226,
pubmedid = 32102294,
}
``` |
Rahmaa/ElsevieR_ClEaN | 2023-02-19T17:57:46.000Z | [
"license:openrail",
"region:us"
] | Rahmaa | null | null | null | 0 | 6 | ---
license: openrail
---
|