id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
persian_ner | 2023-01-25T14:42:29.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:fa",
"license:cc-by-4.0",
"region:us"
] | null | The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. The NER tags are in IOB format. | @inproceedings{poostchi-etal-2016-personer,
title = "{P}erso{NER}: {P}ersian Named-Entity Recognition",
author = "Poostchi, Hanieh and
Zare Borzeshi, Ehsan and
Abdous, Mohammad and
Piccardi, Massimo",
booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
month = dec,
year = "2016",
address = "Osaka, Japan",
publisher = "The COLING 2016 Organizing Committee",
url = "https://www.aclweb.org/anthology/C16-1319",
pages = "3381--3389",
abstract = "Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To abridge this gap, in this paper we target the Persian language that is spoken by a population of over a hundred million people world-wide. We first present and provide ArmanPersoNERCorpus, the first manually-annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving interesting MUC7 and CoNLL scores while outperforming two alternatives based on a CRF and a recurrent neural network.",
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Persian NER
dataset_info:
- config_name: fold1
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-event
'2': I-fac
'3': I-loc
'4': I-org
'5': I-pers
'6': I-pro
'7': B-event
'8': B-fac
'9': B-loc
'10': B-org
'11': B-pers
'12': B-pro
splits:
- name: train
num_bytes: 3362102
num_examples: 5121
- name: test
num_bytes: 1646481
num_examples: 2560
download_size: 1931170
dataset_size: 5008583
- config_name: fold2
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-event
'2': I-fac
'3': I-loc
'4': I-org
'5': I-pers
'6': I-pro
'7': B-event
'8': B-fac
'9': B-loc
'10': B-org
'11': B-pers
'12': B-pro
splits:
- name: train
num_bytes: 3344561
num_examples: 5120
- name: test
num_bytes: 1664022
num_examples: 2561
download_size: 1931170
dataset_size: 5008583
- config_name: fold3
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-event
'2': I-fac
'3': I-loc
'4': I-org
'5': I-pers
'6': I-pro
'7': B-event
'8': B-fac
'9': B-loc
'10': B-org
'11': B-pers
'12': B-pro
splits:
- name: train
num_bytes: 3310491
num_examples: 5121
- name: test
num_bytes: 1698092
num_examples: 2560
download_size: 1931170
dataset_size: 5008583
---
# Dataset Card for Persian NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/HaniehP/PersianNER)
- **Repository:** [Github](https://github.com/HaniehP/PersianNER)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/C16-1319)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains 7,682 Persian sentences comprising 250,015 tokens, each annotated with an NER label. It is distributed in three folds that can be used in turn as training and test sets. The NER tags follow the IOB format.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "I-event", "I-fac", "I-loc", "I-org", "I-pers", "I-pro", "B-event", "B-fac", "B-loc", "B-org", "B-pers", "B-pro"
```
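In the stored examples, `ner_tags` are class-label integers indexed into this list. A minimal sketch of decoding them back to IOB strings (the label list is from the card; the example tag sequence is hypothetical):

```python
# Class-label names from the dataset card, in id order (0..12).
NER_TAGS = ["O", "I-event", "I-fac", "I-loc", "I-org", "I-pers", "I-pro",
            "B-event", "B-fac", "B-loc", "B-org", "B-pers", "B-pro"]

def decode_tags(tag_ids):
    """Map integer ner_tags (as stored in the dataset) to IOB label strings."""
    return [NER_TAGS[i] for i in tag_ids]

# Hypothetical sequence: a two-token person entity followed by a non-entity token.
print(decode_tags([11, 5, 0]))  # ['B-pers', 'I-pers', 'O']
```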
### Data Splits
Each of the three folds (`fold1`, `fold2`, `fold3`) provides a training split of about 5,121 sentences and a test split of about 2,560 sentences, so the folds can be used in turn as training and test sets.
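Since the tags follow the IOB scheme, consecutive `B-`/`I-` labels can be grouped into entity spans during post-processing or evaluation. A minimal sketch (tokens and tags below are hypothetical, not taken from the corpus):

```python
def iob_spans(tokens, tags):
    """Group IOB-tagged tokens into (entity_type, entity_text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:  # "O", or a stray I- tag that does not continue the open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

print(iob_spans(["Hanieh", "Poostchi", "visited", "Tehran"],
                ["B-pers", "I-pers", "O", "B-loc"]))
# [('pers', 'Hanieh Poostchi'), ('loc', 'Tehran')]
```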
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Hanieh Poostchi, Ehsan Zare Borzeshi, Mohammad Abdous, Massimo Piccardi
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Hanieh Poostchi, Ehsan Zare Borzeshi, Mohammad Abdous, Massimo Piccardi
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
The dataset is published for academic use only.
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 International License.
### Citation Information
@inproceedings{poostchi-etal-2016-personer,
title = "{P}erso{NER}: {P}ersian Named-Entity Recognition",
author = "Poostchi, Hanieh and
Zare Borzeshi, Ehsan and
Abdous, Mohammad and
Piccardi, Massimo",
booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
month = dec,
year = "2016",
address = "Osaka, Japan",
publisher = "The COLING 2016 Organizing Committee",
url = "https://www.aclweb.org/anthology/C16-1319",
pages = "3381--3389",
abstract = "Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To abridge this gap, in this paper we target the Persian language that is spoken by a population of over a hundred million people world-wide. We first present and provide ArmanPersoNERCorpus, the first manually-annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving interesting MUC7 and CoNLL scores while outperforming two alternatives based on a CRF and a recurrent neural network.",
}
### Contributions
Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset. |
polsum | 2022-11-03T16:07:56.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:pl",
"license:cc-by-3.0",
"region:us"
] | null | Polish Summaries Corpus: the corpus of Polish news summaries. | @inproceedings{
ogro:kop:14:lrec,
author = "Ogrodniczuk, Maciej and Kopeć, Mateusz",
pdf = "http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf",
title = "The {P}olish {S}ummaries {C}orpus",
pages = "3712--3715",
crossref = "lrec:14"
}
@proceedings{
lrec:14,
editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
isbn = "978-2-9517408-8-4",
title = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
url = "http://www.lrec-conf.org/proceedings/lrec2014/index.html",
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
address = "Reykjavík, Iceland",
key = "LREC",
year = "2014",
organization = "European Language Resources Association (ELRA)"
} | null | 1 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: null
pretty_name: Polish Summaries Corpus
dataset_info:
features:
- name: id
dtype: string
- name: date
dtype: string
- name: title
dtype: string
- name: section
dtype: string
- name: authors
dtype: string
- name: body
dtype: string
- name: summaries
sequence:
- name: ratio
dtype: int32
- name: type
dtype: string
- name: author
dtype: string
- name: body
dtype: string
- name: spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: span_text
dtype: string
splits:
- name: train
num_bytes: 34787575
num_examples: 569
download_size: 6082812
dataset_size: 34787575
---
# Dataset Card for Polish Summaries Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Repository:** http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Paper:** http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Mateusz Kopeć](http://zil.ipipan.waw.pl/MateuszKopec)
### Dataset Summary
The corpus contains a large number of manual summaries of news articles,
with many independently created summaries for a single text. This approach is intended to overcome annotator bias, which is often described as a problem when evaluating summarization algorithms against a single gold standard.
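The summaries are extractive: each entry stores a summary both as text and as character-offset spans (`start`/`end`) into the article body. A minimal sketch of rebuilding a summary from its spans, using a hypothetical mini-article rather than a real corpus entry:

```python
def summary_from_spans(body, starts, ends):
    """Rebuild an extractive summary from parallel start/end character offsets."""
    return " ".join(body[s:e].strip() for s, e in zip(starts, ends))

# Hypothetical article and offsets; real entries carry these in the spans field.
article = "Lidar measures pollution. It is expensive. It uses lasers."
print(summary_from_spans(article, [0, 43], [25, 58]))
# Lidar measures pollution. It uses lasers.
```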
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Polish
## Dataset Structure
### Data Instances
Below is an example from the dataset; detailed descriptions of the fields are provided in the following section.
```
{'authors': 'Krystyna Forowicz',
'body': "ROZMOWA\n\nProf. Krzysztof Ernst, kierownik Zakładu Optyki Instytutu Fizyki Doświadczalnej Uniwersytetu Warszawskiego\n\nLidarowe oczy\n\nRYS. MAREK KONECKI\n\nJutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.\n\nCzy to kosztowne urządzenie będzie służyło tylko naukowcom?\n\nTego typu lidar jest rzeczywiście drogi, kosztuje około miliona marek niemieckich. Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Nad lidarem pracują specjaliści od laserów i od komputerów. Współpracujemy z doskonałym laboratorium prof. Ludgera Wöste z Freie Universitat Berlin rozwijającym m.in. problematykę lidarową. Pakiet software'u wzbogacamy o nowe algorytmy, które potrafią lepiej i dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia. Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. \n\nBadania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. 
Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych.\n\nCzy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie. Ale np. obecnie prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen. Tym szkodliwym gazem może być skażone powietrze w miastach, w których zlokalizowane są zakłady chemiczne, np. w Bydgoszczy pewne ilości fosgenu emitują Zakłady Chemiczne Organika- Zachem. \n\nLidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie. Możemy np. badać zawartość ozonu w troposferze. Okazuje się bowiem, że o ile brak tego gazu w wysokich warstwach atmosfery powoduje groźny efekt cieplarniany, to jego nadmiar tuż nad Ziemią jest szkodliwy. Groźne są też substancje gazowe, jak np. tlenki azotu, będące następstwem spalin samochodowych. A samochodów przybywa.\n\nCzy stać nas będzie na prowadzenie pomiarów ozonu w miastach? \n\nKoszt jednego dnia kampanii pomiarowej firmy zachodnie szacują na kilka tysięcy DM. Potrzebne są pieniądze na utrzymanie lidaru, na prowadzenie badań. Nasze przedsięwzięcie nie ma charakteru komercyjnego. Koszt pomiarów będzie znacznie niższy. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. Chcielibyśmy rozwinąć tutaj współpracę z państwowymi i wojewódzkimi służbami ochrony środowiska. Tego typu badania były prowadzone np. w Lyonie. 
Okazało się, że najwięcej tlenków azotu występuje niekoniecznie tam gdzie są one produkowane, to znaczy nie przy najruchliwszych ulicach, jeśli są one dobrze wentylowane a gromadzą się one w małych uliczkach. Przede wszystkim jednak do końca tego roku zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu trzech granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie. Prowadziliśmy pomiary w samym Turowie, gdzie elektrownia Turoszowska jest głównym źródłem emisji. W planie mamy Bogatynię, zagłębie miedziowe. \n\nW Czarnym Trójkącie istnieje wiele stacjonarnych stacji monitoringowych.\n\nNasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych. \n\nJak wypadł Czarny Trójkąt?\n\nKiedy występowaliśmy o finansowanie tego projektu do Fundacji Współpracy Polsko-Niemieckiej zanieczyszczenie powietrza w Czarnym Trójkącie było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać. Obecnie stężenie dwutlenku siarki jest na granicy naszych możliwości pomiarowych. Dla regionu Turoszowskiego to dobra wiadomość i dla stosunków polsko-niemieckich też.\n\nTypów lidarów jest wiele \n\nTen lidar pracuje w obszarze bliskiego nadfioletu i promieniowania widzialnego, które jest wynikiem wykorzystania drugiej lub trzeciej harmonicznej lasera szafirowego, pracującego na granicy czerwieni i podczerwieni. DIAL jest tym typem lidara, który dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia. 
W Stanach Zjednoczonych lidary umieszcza się na satelitach (program NASA). Określają na przestrzeni kilkudziesięciu kilometrów rozkłady temperatury, wilgotności, ciśnienia, a także prędkości wiatru. Wykrywają pojawianie się huraganów, a nawet mogą określać rozmiary oka tajfunu.\n\nIle takich urządzeń jest w Europie?\n\n- W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu. Wykrywanie toluenu i benzenu jest oryginalnym rozwiązaniem. Długość fali dla benzenu jest już na skraju możliwości widmowych. Nasz lidar typu DIAL jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie. Ale historia lidarów w naszym kraju jest dłuższa i zaczęła się na początku lat 60. Pierwsze próby prowadzone były w stacji geofizycznej PAN w Belsku, niedługo po skonstruowaniu pierwszego w świecie lasera rubinowego. Potem powstał lidar stacjonarny, również typu DIAL, w Gdańsku, a w Krakowie sodary - urządzenia oparte na falach akustycznych, wygodne np. do pomiarów szybkości wiatru. Lidar umieszczony na samochodzie i zbudowany w latach 80 na Politechnice Poznańskiej w perspektywie miał być lidarem typu DIAL.\n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji (zdjęć satelitarnych) Instytutu Geofizyki i, co bardzo ważne, współpraca z Freie Universität Berlin. Mamy również na UW Międzywydziałowe Studia Ochrony Środowiska i studentom przekazujemy informacje o lidarze i fizycznych metodach badania środowiska. Nasze działania dydaktyczne bardzo efektywnie wspiera NFOŚ.\n\nRozmawiała Krystyna Forowicz",
'date': '1997-04-21',
'id': '199704210011',
'section': 'Nauka i Technika',
'summaries': {'author': ['I',
'I',
'I',
'C',
'C',
'C',
'K',
'K',
'K',
'G',
'G',
'G',
'J',
'J',
'J'],
'body': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Czy to kosztowne urządzenie będzie służyło tylko naukowcom? Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Czy to kosztowne urządzenie będzie służyło tylko naukowcom? Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?Nie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie. Możemy np. badać zawartość ozonu w troposferze. W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu. Fizycy dotychczas nie zajmowali się ochroną środowiska?Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, naukową I dydaktyczną. Żeby przetworzyć sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych. Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, naukową I dydaktyczną.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Tego typu lidar jest drogi, kosztuje około miliona marek niemieckich. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie.Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową i dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\nto najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.\nNasze przedsięwzięcie nie ma charakteru komercyjnego. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\n\nto kosztowne urządzenie będzie służyło tylko naukowcom?\n\nlidar jest rzeczywiście drogi. to najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.\n\nCzy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze. Ale prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen.\n\nstać nas będzie na prowadzenie pomiarów ozonu w miastach? \n\nNasze przedsięwzięcie nie ma charakteru komercyjnego. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie. zanieczyszczenie było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać.\nDIAL dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska. \n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu.',
'Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\nto najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. staramy się rozszerzyć jego zastosowanie na inne substancje występujące w atmosferze. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.',
"Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. staramy się rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. Pakiet software'u wzbogacamy o nowe algorytmy, które potrafią dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej. \n\nChcemy mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. \n\nDIAL jest tym typem lidara, który dzisiaj ma największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia. W Europie takich lidarów jak nasz jest zaledwie kilka. Nasz lidar jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie. \n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.",
'Co to jest lidar? \nPROF. KRZYSZTOF ERNST: to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany.'],
'ratio': [10, 20, 5, 10, 20, 5, 10, 20, 5, 10, 20, 5, 10, 20, 5],
'spans': [{'end': [244, 396, 457, 867, 922, 1022, 1103, 1877],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Czy to kosztowne urządzenie będzie służyło tylko naukowcom?',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.'],
'start': [153, 247, 398, 760, 875, 1020, 1023, 1631]},
{'end': [244,
396,
457,
867,
922,
1022,
1103,
1878,
2132,
2296,
2969,
6225,
6985,
7047,
7282,
7326,
7383],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Czy to kosztowne urządzenie będzie służyło tylko naukowcom?',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.',
'Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?',
'Nie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie.',
'Możemy np. badać zawartość ozonu w troposferze.',
'W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu.',
'',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?',
'Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [153,
247,
398,
760,
875,
1020,
1023,
1631,
2064,
2134,
2921,
6108,
6984,
6992,
7049,
7304,
7344]},
{'end': [244, 396, 1103, 1774, 1877],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał',
'.'],
'start': [153, 247, 1102, 1631, 1876]},
{'end': [159,
227,
243,
360,
804,
882,
1025,
1044,
1103,
1454,
1540,
1629,
2848],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną,',
'naukową',
'I',
'dydaktyczną',
'.',
'Żeby przetworzyć',
'sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać',
'dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji.',
'muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.'],
'start': [153,
173,
238,
270,
591,
875,
1022,
1033,
1101,
1437,
1459,
1549,
2670]},
{'end': [159, 227, 243, 396, 922, 1103, 1629, 2062, 2582, 2848],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem',
'. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych.',
'',
'Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.'],
'start': [153, 173, 238, 270, 542, 1020, 1437, 1631, 2581, 2602]},
{'end': [159, 227, 243, 360, 804, 882, 1025, 1044, 1102],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną,',
'naukową',
'I',
'dydaktyczną',
'.'],
'start': [153, 173, 238, 270, 591, 875, 1022, 1033, 1101]},
{'end': [246, 396, 922, 1102, 4763],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów.'],
'start': [153, 247, 590, 1022, 4555]},
{'end': [246, 396, 480, 542, 1021, 1102, 2920, 4989],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Tego typu lidar jest',
'drogi, kosztuje około miliona marek niemieckich.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych.'],
'start': [153, 247, 459, 493, 590, 1022, 2602, 4555]},
{'end': [246, 360, 626, 883, 920, 1102],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową',
'i',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.'],
'start': [153, 247, 625, 760, 919, 1032]},
{'end': [158,
262,
271,
359,
397,
590,
761,
803,
867,
907,
922,
1025,
1102,
3311,
3516,
3595,
3623,
3675,
4226,
4332],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad',
'urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'',
'Nasze przedsięwzięcie nie ma charakteru komercyjnego.',
'Chcemy np. mierzyć w Warszawie rozkłady',
'koncentracji tlenków azotu',
'.',
'Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu',
'granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.'],
'start': [153,
172,
263,
279,
396,
548,
699,
769,
806,
875,
911,
1022,
1033,
3310,
3462,
3556,
3596,
3674,
4158,
4233]},
{'end': [158,
262,
271,
359,
398,
459,
498,
543,
590,
761,
803,
867,
922,
1025,
1102,
2242,
2300,
2406,
3247,
3311,
3516,
3595,
3675,
4226,
4333,
5130,
5241,
5439,
5661,
5756,
7113],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to kosztowne urządzenie będzie służyło tylko naukowcom?',
'lidar jest rzeczywiście drogi',
'.',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze',
'. Ale',
'prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen.',
'',
'stać nas będzie na prowadzenie pomiarów ozonu w miastach?',
'Nasze przedsięwzięcie nie ma charakteru komercyjnego.',
'Chcemy np. mierzyć w Warszawie rozkłady',
'koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta.',
'Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu',
'granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.',
'zanieczyszczenie',
'było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać.',
'',
'DIAL',
'dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska.',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu.'],
'start': [153,
172,
263,
279,
396,
402,
469,
541,
548,
699,
769,
806,
875,
1022,
1033,
2062,
2294,
2312,
3245,
3251,
3462,
3556,
3596,
4158,
4233,
5114,
5160,
5438,
5656,
5690,
6990]},
{'end': [262, 271, 359, 397, 590, 761, 803, 807, 867, 907, 922, 1025, 1102],
'span_text': ['Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'',
'wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad',
'urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.'],
'start': [227,
263,
279,
396,
548,
699,
769,
806,
824,
875,
911,
1022,
1033]},
{'end': [245,
360,
761,
936,
971,
1022,
1733,
1878,
4159,
4614,
4772,
4818,
4860,
4906,
7283,
7326,
7383],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'staramy się',
'rozszerzyć jego zastosowanie',
'na inne substancje występujące w atmosferze.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej',
'.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany',
'.',
'Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [227,
246,
699,
924,
942,
977,
1631,
1876,
4076,
4555,
4765,
4778,
4823,
4904,
7114,
7305,
7344]},
{'end': [245,
360,
625,
761,
936,
1022,
1311,
1357,
1436,
1733,
1878,
3247,
3311,
3563,
3676,
4159,
4614,
4772,
4818,
4906,
5410,
5439,
5701,
5789,
6163,
6364,
6472,
7048,
7283,
7326,
7383],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej',
'potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'staramy się',
'rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze.',
"Pakiet software'u",
'wzbogacamy o nowe algorytmy, które potrafią',
'dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej',
'.',
'',
'',
'Chcemy',
'mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi.',
'',
'',
'DIAL jest tym typem lidara, który dzisiaj ma',
'największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia.',
'W Europie takich lidarów jak nasz jest zaledwie kilka.',
'Nasz lidar',
'jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie.',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?',
'Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [227,
246,
591,
668,
924,
942,
1293,
1313,
1366,
1631,
1876,
3246,
3310,
3556,
3567,
4076,
4555,
4765,
4778,
4823,
5409,
5438,
5656,
5714,
6108,
6353,
6374,
6990,
7049,
7305,
7344]},
{'end': [245, 271, 360, 761, 4159, 4614, 4772, 4818, 4860, 4905],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST:',
'to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany',
'.'],
'start': [227, 246, 276, 699, 4076, 4555, 4765, 4778, 4823, 4904]}],
'type': ['extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract']},
'title': 'Lidarowe oczy'}
```
### Data Fields
- `id`: a `string` example identifier
- `date`: date of the original article (`string`)
- `title`: title of the original article (`string`)
- `section`: the section of the newspaper the original article belonged to (`string`)
- `authors`: original article authors (`string`)
- `body`: original article body (list of `string`s)
- `summaries`: a dictionary feature containing summaries of the original article with the following attributes:
  - `ratio`: ratio of the summary, as a percentage of the original article (list of `int32`s)
  - `type`: type of summary - extractive (`extract`) or abstractive (`abstract`) (list of `string`s)
  - `author`: acronym of the summary author (list of `string`s)
  - `body`: body of the summary (list of `string`s)
- `spans`: a list containing spans for extractive summaries (empty for abstractive summaries):
- `start`: start of span (`int32`)
- `end`: end of span (`int32`)
- `span_text`: span text (`string`)
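The `start`/`end` offsets index directly into the article text, so an extractive summary can be reconstructed by slicing. A minimal sketch of that reconstruction — the record below is a tiny, invented stand-in for a real example, whose articles and offsets are much longer:

```python
# Reconstruct an extractive summary from (start, end) character offsets.
# The record here is a short hypothetical stand-in for a real example.
record = {
    "body": "Jutro pokaz lidara. Co to jest lidar? Jest to urzadzenie optyczne.",
    "summaries": {
        "type": ["extract"],
        "spans": [{"start": [20, 38], "end": [37, 66],
                   "span_text": ["Co to jest lidar?",
                                 "Jest to urzadzenie optyczne."]}],
    },
}

def reconstruct(body: str, spans: dict) -> str:
    """Join the text between each (start, end) offset pair."""
    pieces = [body[s:e] for s, e in zip(spans["start"], spans["end"])]
    return " ".join(p.strip() for p in pieces if p.strip())

summary = reconstruct(record["body"], record["summaries"]["spans"][0])
print(summary)  # Co to jest lidar? Jest to urzadzenie optyczne.
```

In the real data the offsets point into the full article text, so each slice can be cross-checked against the stored `span_text`.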
### Data Splits
The dataset comes as a single `train` split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{ogro:kop:14:lrec,
author = "Ogrodniczuk, Maciej and Kopeć, Mateusz",
pdf = "http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf",
title = "The {P}olish {S}ummaries {C}orpus",
pages = "3712--3715",
crossref = "lrec:14"
}
@proceedings{lrec:14,
editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
isbn = "978-2-9517408-8-4",
title = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
url = "http://www.lrec-conf.org/proceedings/lrec2014/index.html",
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
address = "Reykjavík, Iceland",
key = "LREC",
year = "2014",
organization = "European Language Resources Association (ELRA)"
}
```
### Contributions
Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset. |
ro_sts | 2022-11-18T21:42:20.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-sts-b",
"language:ro",
"license:... | null | The RO-STS (Romanian Semantic Textual Similarity) dataset contains 8628 pairs of sentences with their similarity score. It is a high-quality translation of the STS benchmark dataset. | @inproceedings{dumitrescu2021liro,
title={Liro: Benchmark and leaderboard for romanian language tasks},
author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
year={2021}
} | null | 0 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ro
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-sts-b
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: RO-STS
dataset_info:
features:
- name: score
dtype: float32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
config_name: ro_sts
splits:
- name: train
num_bytes: 879073
num_examples: 5749
- name: test
num_bytes: 194330
num_examples: 1379
- name: validation
num_bytes: 245926
num_examples: 1500
download_size: 1267607
dataset_size: 1319329
---
# Dataset Card for RO-STS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/dumitrescustefan/RO-STS)
- **Repository:** [GitHub](https://github.com/dumitrescustefan/RO-STS)
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email](mailto:dumitrescu.stefan@gmail.com)
### Dataset Summary
We present RO-STS - the Semantic Textual Similarity dataset for the Romanian language. It is a high-quality translation of the [STS English dataset](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark). RO-STS contains 8,628 sentence pairs with their similarity scores. The original English sentences were collected from news headlines, captions of images and user forums, and are categorized accordingly. The Romanian release follows this categorization and provides the same train/validation/test split with 5,749/1,500/1,379 sentence pairs in each subset.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The text dataset is in Romanian (`ro`).
## Dataset Structure
### Data Instances
An example looks like this:
```
{'score': 1.5,
'sentence1': 'Un bărbat cântă la harpă.',
'sentence2': 'Un bărbat cântă la claviatură.',
}
```
### Data Fields
- `score`: a float representing the semantic similarity score where 0.0 is the lowest score and 5.0 is the highest
- `sentence1`: a string representing a text
- `sentence2`: another string to compare the previous text with
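Because the gold `score` lives on a 0–5 scale, a common preprocessing step when training embedding models against cosine similarity is to rescale it to [0, 1]. A minimal, illustrative sketch using the pair from the example above (the `label` field name is an assumption, not part of the dataset):

```python
# Rescale STS gold scores from the 0-5 range to [0, 1], a common step
# before regressing cosine similarity against them.
pairs = [
    {"sentence1": "Un bărbat cântă la harpă.",
     "sentence2": "Un bărbat cântă la claviatură.",
     "score": 1.5},
]

def normalize(example: dict) -> dict:
    out = dict(example)
    out["label"] = example["score"] / 5.0  # 0.0 .. 1.0
    return out

normalized = [normalize(p) for p in pairs]
print(normalized[0]["label"])  # 0.3
```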
### Data Splits
The train/validation/test split contain 5,749/1,500/1,379 sentence pairs.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
[Needs More Information]
#### Initial Data Collection and Normalization
To construct the dataset, we first obtained automatic translations using Google's translation engine. These were then manually checked, corrected, and cross-validated by human volunteers.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC BY-SA 4.0 License
### Citation Information
```
@inproceedings{dumitrescu2021liro,
title={Liro: Benchmark and leaderboard for romanian language tasks},
author={Dumitrescu, Stefan Daniel and Rebeja, Petru and Lorincz, Beata and Gaman, Mihaela and Avram, Andrei and Ilie, Mihai and Pruteanu, Andrei and Stan, Adriana and Rosia, Lorena and Iacobescu, Cristina and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
year={2021}
}
```
### Contributions
Thanks to [@lorinczb](https://github.com/lorinczb) for adding this dataset. |
scielo | 2023-06-01T14:59:47.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:es",
"language:pt",
"license:unknown",
"arxiv:1905.01852",
"region:us"
] | null | A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm. | @inproceedings{soares2018large,
title={A Large Parallel Corpus of Full-Text Scientific Articles},
author={Soares, Felipe and Moreira, Viviane and Becker, Karin},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},
year={2018}
} | null | 1 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- es
- pt
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: SciELO
dataset_info:
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 71777213
num_examples: 177782
download_size: 22965217
dataset_size: 71777213
- config_name: en-pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 1032669686
num_examples: 2828917
download_size: 322726075
dataset_size: 1032669686
- config_name: en-pt-es
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
- es
splits:
- name: train
num_bytes: 147472132
num_examples: 255915
download_size: 45556562
dataset_size: 147472132
config_names:
- en-es
- en-pt
- en-pt-es
---
# Dataset Card for SciELO
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SciELO](https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB)
- **Repository:**
- **Paper:** [A Large Parallel Corpus of Full-Text Scientific Articles](https://arxiv.org/abs/1905.01852)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A parallel corpus of full-text scientific articles collected from the Scielo database in the following languages: English, Portuguese and Spanish.
The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences.
Alignment was carried out using the Hunalign algorithm.
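Each example stores a `translation` dictionary keyed by language code (two codes for the bilingual configs, three for `en-pt-es`). A minimal sketch of flattening such records into (source, target) pairs — the sample sentences are invented, not taken from the corpus:

```python
# Unpack `translation` dictionaries (language code -> sentence) into
# (source, target) tuples, e.g. for feeding a translation model.
records = [
    {"translation": {"en": "The cell membrane is selectively permeable.",
                     "es": "La membrana celular es selectivamente permeable."}},
    {"translation": {"en": "Results were statistically significant.",
                     "es": "Los resultados fueron estadísticamente significativos."}},
]

def to_pairs(records, src="en", tgt="es"):
    """Return (source, target) sentence tuples for the given language pair."""
    return [(r["translation"][src], r["translation"][tgt]) for r in records]

pairs = to_pairs(records)
print(len(pairs))  # 2
```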
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{soares2018large,
title={A Large Parallel Corpus of Full-Text Scientific Articles},
author={Soares, Felipe and Moreira, Viviane and Becker, Karin},
booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)},
year={2018}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
stsb_mt_sv | 2022-11-18T21:48:42.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|o... | null | null | @article{isbister2020not,
title={Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity},
author={Isbister, Tim and Sahlgren, Magnus},
journal={arXiv preprint arXiv:2009.03116},
year={2020}
} | null | 1 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- machine-generated
language:
- sv
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-sts-b
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: Swedish Machine Translated STS-B
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float32
config_name: plain_text
splits:
- name: test
num_bytes: 171823
num_examples: 1379
- name: validation
num_bytes: 218843
num_examples: 1500
- name: train
num_bytes: 772847
num_examples: 5749
download_size: 383047
dataset_size: 1163513
---
# Dataset Card for Swedish Machine Translated STS-B
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [stsb-mt-sv homepage](https://github.com/timpal0l/sts-benchmark-swedish)
- **Repository:** [stsb-mt-sv repository](https://github.com/timpal0l/sts-benchmark-swedish)
- **Paper:** [Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity
](https://arxiv.org/abs/2009.03116)
- **Point of Contact:** [Tim Isbister](mailto:timisbisters@gmail.com)
### Dataset Summary
This dataset is a machine-translated Swedish version of the STS-B semantic textual similarity benchmark.
### Supported Tasks and Leaderboards
This dataset can be used to evaluate semantic textual similarity models in Swedish.
### Languages
The text in the dataset is in Swedish. The associated BCP-47 code is `sv`.
## Dataset Structure
### Data Instances
What a sample looks like:
```
{'score': '4.2',
'sentence1': 'Undrar om jultomten kommer i år pga Corona..?',
'sentence2': 'Jag undrar om jultomen kommer hit i år med tanke på covid-19',
}
```
### Data Fields
- `score`: a float representing the semantic similarity score, where 0.0 is the lowest and 5.0 the highest.
- `sentence1`: a string containing the first text
- `sentence2`: a string containing the second text, whose semantics are compared with `sentence1`
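STS-style benchmarks are typically scored by correlating a model's predicted similarities with the gold `score` values. A minimal, dependency-free sketch — the `gold`/`pred` lists below are hypothetical, not drawn from the dataset:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical gold scores (0.0-5.0) and model predictions.
gold = [4.2, 0.5, 3.1, 2.0]
pred = [4.0, 1.0, 2.8, 2.5]
print(round(pearson(gold, pred), 3))  # → 0.982
```

In practice, papers usually report both Pearson and Spearman correlation on the test split.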
### Data Splits
The data is split into a training, validation and test set. The final split sizes are as follows:
| Train | Valid | Test |
| ------ | ----- | ---- |
| 5749 | 1500 | 1379 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The machine-translated version was put together by [@timpal0l](https://github.com/timpal0l)
### Licensing Information
[Needs More Information]
### Citation Information
```
@article{isbister2020not,
title={Why Not Simply Translate? A First Swedish Evaluation Benchmark for Semantic Similarity},
author={Isbister, Tim and Sahlgren, Magnus},
journal={arXiv preprint arXiv:2009.03116},
year={2020}
}
```
### Contributions
Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset. |
tashkeela | 2022-11-03T16:07:53.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ar",
"l... | null | Arabic vocalized texts.
It contains 75 million fully vocalized words, mainly from 97 books in classical and modern Arabic. | @article{zerrouki2017tashkeela,
title={Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems},
author={Zerrouki, Taha and Balla, Amar},
journal={Data in brief},
volume={11},
pages={147},
year={2017},
publisher={Elsevier}
} | null | 0 | 5 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
license:
- gpl-2.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Tashkeela
tags:
- diacritics-prediction
dataset_info:
features:
- name: text
dtype: string
- name: book
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 1081110249
num_examples: 97
download_size: 183393530
dataset_size: 1081110249
---
# Dataset Card for Tashkeela
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Tashkeela](https://sourceforge.net/projects/tashkeela/)
- **Repository:** [Tashkeela](https://sourceforge.net/projects/tashkeela/)
- **Paper:** [Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems](https://www.sciencedirect.com/science/article/pii/S2352340917300112)
- **Point of Contact:** [Taha Zerrouki](mailto:t_zerrouki@esi.dz)
### Dataset Summary
It contains 75 million fully vocalized words, drawn mainly from 97 books in classical and modern Arabic.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Arabic.
## Dataset Structure
### Data Instances
```
{'book': 'zip://Tashkeela-arabic-diacritized-text-utf8-0.3/texts.txt/msa/al-kalema.org/أشكال-التجارب-في-مَثَل-الزارع.htm.txt::https://sourceforge.net/projects/tashkeela/files/latest/download',
'text': 'الكلمة\n\n\nصفحه اصلی\nاشترك\nالكتاب المقدس\nجميع المقالات\nالترتيب بالموضوع\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nهذا المقال على نسخة PDF\n\n\nأشكال التجارب في مَثَل الزارع\n\n\tقد رأينا في مقال " \nوسائل واشكال التجارب" الأشكال التي من الممكن أن تتخذها التجارب (وخاصة الاختبارات التي تأتي من خلال الآلام والاضطهاد وأشراك إطاعة شهوات الإنسان العتيق، الجسد)، نستطيع أيضاً أن نرى هذه الأقسام عاملة في مثال الزارع. هناك مجموعتين في مثال الزارع أنه برغم من سماعهم واستقبالهم للكلمة، إلا أنهم لم يجلبوا ثماراً. والسؤال هو لماذا؟\n\n1. التجارب في القسم الثاني من مثال الزارع\n\nفيما يخص القسم الثاني من مثال الزارع، تخبرنا عنها متى 13: 20- 21 ولوقا 8: 13 \nمتى 13: 20- 21\n" وَالْمَزْرُوعُ عَلَى الأَمَاكِنِ الْمُحْجِرَةِ هُوَ الَّذِي يَسْمَعُ الْكَلِمَةَ، وَحَالاً يَقْبَلُهَا بِفَرَحٍ، وَلكِنْ لَيْسَ لَهُ أَصْلٌ فِي ذَاتِهِ، بَلْ هُوَ إِلَى حِينٍ. فَإِذَا حَدَثَ ضِيقٌ أَوِ اضْطِهَادٌ مِنْ أَجْلِ الْكَلِمَةِ فَحَالاً يَعْثُرُ."\nلوقا 8: 13\n" وَالَّذِينَ عَلَى الصَّخْرِ هُمُ الَّذِينَ مَتَى سَمِعُوا يَقْبَلُونَ الْكَلِمَةَ بِفَرَحٍ، وَهؤُلاَءِ لَيْسَ لَهُمْ أَصْلٌ، فَيُؤْمِنُونَ إِلَى حِينٍ، وَفِي وَقْتِ التَّجْرِبَةِ يَرْتَدُّونَ."\n\nكما نرى، الناس في هذا القسم سمعوا الكلمة وحالاً قبلوها بفرح! بمعنى آخر، لقد كانوا متحمسين جداً تجاه الكلمة. ثم جاءت التجارب والاختبارات في شكل ضيق واضطهاد من أجل الكلمة، أي أنه بسبب الكلمة، اضطهد هؤلاء الناس. وعندئذ توقفوا. عوضاً عن أن يحفظوا ويتمسكوا بالكلمة التي قد حدث واستقبلوها بفرح، تراجعوا وسقطوا بعيداً، إن كنت مؤمناً صغيراً مليء بالحماسة تجاه الله، وبالرغم من أنه قد يبدو أنه لا يوجد شيطان من حولك، فهذا لن يستمر إلى الأبد. فالتجارب والاختبارات آتية. ستحتاج إلى أن تحفظ وتتمسك بالإيمان وبالكلمة التي قد حدث واستقبلتها بفرح. كما تقول لنا الكلمة:\nعبرانيين 10: 35- 39\n" فَلاَ تَطْرَحُوا ثِقَتَكُمُ الَّتِي لَهَا مُجَازَاةٌ عَظِيمَةٌ. لأَنَّكُمْ تَحْتَاجُونَ إِلَى الصَّبْرِ، حَتَّى إِذَا صَنَعْتُمْ مَشِيئَةَ اللهِ تَنَالُونَ الْمَوْعِدَ. لأَنَّهُ بَعْدَ قَلِيل جِدًّا «سَيَأْتِي الآتِي وَلاَ يُبْطِئُ. 
أَمَّا الْبَارُّ فَبِالإِيمَانِ يَحْيَا، وَإِنِ ارْتَدَّ لاَ تُسَرُّ بِهِ نَفْسِي». وَأَمَّا نَحْنُ فَلَسْنَا مِنَ الارْتِدَادِ لِلْهَلاَكِ، بَلْ مِنَ الإِيمَانِ لاقْتِنَاءِ النَّفْسِ."\n\nوالضيق قد يأخذ أشكالاً عديدة. رأيت أناساً يسقطون، تاركين الإيمان لأن آبائهم أو أقاربهم وأصدقائهم قد عارضوهم ورفضوهم بسبب إيمانهم. بالطبع قد يأخذ الاضطهاد أشكالاً أكثر من ذلك أيضاً، مثل أن تلقى في سجن أو أن تعذب لأجل إيمانك. قد يسبب الموت كذلك، كما حدث مع اسطفانوس ويعقوب أخو يوحنا. وتقول الكلمة من أجلك ومن أجل كل الذين حوكموا:\nرومية 16: 19- 20\n" لأَنَّ طَاعَتَكُمْ ذَاعَتْ إِلَى الْجَمِيعِ، فَأَفْرَحُ أَنَا بِكُمْ، وَأُرِيدُ أَنْ تَكُونُوا حُكَمَاءَ لِلْخَيْرِ وَبُسَطَاءَ لِلشَّرِّ. وَإِلهُ السَّلاَمِ سَيَسْحَقُ الشَّيْطَانَ تَحْتَ أَرْجُلِكُمْ سَرِيعًا."\nو بطرس الأولى 5: 8- 10\n" اُصْحُوا وَاسْهَرُوا. لأَنَّ إِبْلِيسَ خَصْمَكُمْ كَأَسَدٍ زَائِرٍ، يَجُولُ مُلْتَمِسًا مَنْ يَبْتَلِعُهُ هُوَ. فَقَاوِمُوهُ، رَاسِخِينَ فِي الإِيمَانِ، عَالِمِينَ أَنَّ نَفْسَ هذِهِ الآلاَمِ تُجْرَى عَلَى إِخْوَتِكُمُ الَّذِينَ فِي الْعَالَمِ. وَإِلهُ كُلِّ نِعْمَةٍ الَّذِي دَعَانَا إِلَى مَجْدِهِ الأَبَدِيِّ فِي الْمَسِيحِ يَسُوعَ، بَعْدَمَا تَأَلَّمْتُمْ يَسِيرًا، هُوَ يُكَمِّلُكُمْ، وَيُثَبِّتُكُمْ، وَيُقَوِّيكُمْ، وَيُمَكِّنُكُمْ."\n\nتمسك بالإيمان حتى النهاية. ضع حياتك ووضعك بين يدي الله وكن مستعداً لمواجهة أي شيء قد يحدث، أجل وحتى السخرية والعذاب. الله معك، سيقويك وسيعينك تماماً مثلما فعل مع يسوع في بستان جسثيماني. وتماماً مثلما فعل مع بولس في السجن عندما اضطهد من قِبَل اليهود (أعمال الرسل 23: 11). وكما قال بولس في كورنثوس الثانية 1: 7:" عَالِمِينَ أَنَّكُمْ كَمَا أَنْتُمْ شُرَكَاءُ فِي الآلاَمِ، كَذلِكَ فِي التَّعْزِيَةِ أَيْضًا." فالعزاء الآتي من الله يوازن أي سخرية أو أي عذاب قد يأتي إلينا من أي إنسان.\n\n2. 
التجارب في القسم الثالث من مثال الزارع\n\nبخصوص القسم الثالث من مثال الزارع، فنقرأ عنه في مرقس 4: 18- 19\n\n" وَهؤُلاَءِ هُمُ الَّذِينَ زُرِعُوا بَيْنَ الشَّوْكِ: هؤُلاَءِ هُمُ الَّذِينَ يَسْمَعُونَ الْكَلِمَةَ، وَهُمُومُ هذَا الْعَالَمِ وَغُرُورُ الْغِنَى وَشَهَوَاتُ سَائِرِ الأَشْيَاءِ تَدْخُلُ وَتَخْنُقُ الْكَلِمَةَ فَتَصِيرُ بِلاَ ثَمَرٍ."\nو لوقا 8: 14\n" وَالَّذِي سَقَطَ بَيْنَ الشَّوْكِ هُمُ الَّذِينَ يَسْمَعُونَ، ثُمَّ يَذْهَبُونَ فَيَخْتَنِقُونَ مِنْ هُمُومِ الْحَيَاةِ وَغِنَاهَا وَلَذَّاتِهَا، وَلاَ يُنْضِجُونَ ثَمَرًا."\n\nهؤلاء قد سمعوا الكلمة وفهموها ولكنهم صاروا بلا ثمر، وما هو السبب؟ السبب هو لأنهم تركوا أبواب قلوبهم مفتوحة لأشواك " وَهُمُومُ هذَا الْعَالَمِ وَغُرُورُ الْغِنَى وَشَهَوَاتُ سَائِرِ الأَشْيَاءِ" (مرقس 4: 19)، والتي تدخل فتخنق الكلمة، كما رأينا يعقوب دائماً ما يقول:\nيعقوب 1: 13- 15\n" لاَ يَقُلْ أَحَدٌ إِذَا جُرِّبَ: «إِنِّي أُجَرَّبُ مِنْ قِبَلِ اللهِ»، لأَنَّ اللهَ غَيْرُ مُجَرَّبٍ بِالشُّرُورِ، وَهُوَ لاَ يُجَرِّبُ أَحَدًا. وَلكِنَّ كُلَّ وَاحِدٍ يُجَرَّبُ إِذَا انْجَذَبَ وَانْخَدَعَ مِنْ شَهْوَتِهِ. ثُمَّ الشَّهْوَةُ إِذَا حَبِلَتْ تَلِدُ خَطِيَّةً، وَالْخَطِيَّةُ إِذَا كَمَلَتْ تُنْتِجُ مَوْتًا."\nوتيموثاوس الأولى 6: 9 تقول لنا\n" وَأَمَّا الَّذِينَ يُرِيدُونَ أَنْ يَكُونُوا أَغْنِيَاءَ، فَيَسْقُطُونَ فِي تَجْرِبَةٍ وَفَخٍّ وَشَهَوَاتٍ كَثِيرَةٍ غَبِيَّةٍ وَمُضِرَّةٍ، تُغَرِّقُ النَّاسَ فِي الْعَطَبِ وَالْهَلاَكِ."\n\nيجب أن نلاحظ شيئاً هنا: أن تأثير هموم الحياة هو نفس التأثير الذي لتجارب الغنى وشهوات الأشياء الأخرى. فهموم الحياة أيضاً لا تجلب الثمار، إذاً فإن اردت أن تكون مسيحياً مثمراً، أي مسيحي حقيقي وليس فقط مسيحي اسمي، فيجب عليك أن تزيل أشواك الهموم والغنى وملذات الحياة وأن تمنعهم من العودة مرة أخرى. تحتاج إلى أن تفعل شيئاً، تحتاج إلى أن تتغير والله سيعينك في هذا إن كنت حقاً تريده. التجارب في القسم الثالث من مثال الزارع لا تأتي من خلال الاضطهاد والآلام عن طريق الشيطان. ولكن هنا تأخذ التجارب صوراً أكثر مكراً والتي مع هذا تتطلب مقاومتنا. 
الاهتمام بما يهتم به هذا العالم ("هموم هذا العالم")، الرغبة في الغنى أو اشتهاء الأشياء الأخرى هي أمور خطيرة جداً. إنها أشواك يجب إزالتها. كما رأينا بولس يقول:\nرومية 13: 14\n" بَلِ الْبَسُوا الرَّبَّ يَسُوعَ الْمَسِيحَ، وَلاَ تَصْنَعُوا تَدْبِيرًا لِلْجَسَدِ لأَجْلِ الشَّهَوَاتِ."\n\n" لاَ تَصْنَعُوا تَدْبِيرًا لِلْجَسَدِ" والتي تعني أنه يجب علينا أن لا نهتم بالجسد وشهواته. ولكن عوضاً عن ذلك ينبغي لنا أن نطعم أنفسنا بلبن الكلمة الصافي الذي ننمو بواستطه (بطرس الأولى 2: 2).\n\n\nتاسوس كيولاشوجلو'}
```
### Data Fields
- `book` (str): Book filename.
- `text` (str): Text of the book.
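Since the corpus is intended as training data for auto-diacritization systems, a common preprocessing step is to strip the diacritics from the vocalized `text` to obtain the model input, keeping the original as the target. A minimal sketch — the Unicode range below covers the standard Arabic harakat and is an assumption, not part of the dataset itself:

```python
import re

# Standard Arabic short-vowel diacritics (harakat) occupy U+064B..U+0652.
HARAKAT = re.compile("[\u064B-\u0652]")

def strip_tashkeel(text: str) -> str:
    """Remove diacritics, turning a vocalized string into plain text."""
    return HARAKAT.sub("", text)

print(strip_tashkeel("كَتَبَ"))  # → كتب
```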
### Data Splits
The dataset is not split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
The Modern Standard Arabic texts were crawled from the Internet.
#### Who are the source language producers?
Websites.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[GNU General Public License, version 2 (GPLv2)](https://opensource.org/licenses/GPL-2.0).
### Citation Information
The dataset was published on this [paper](https://www.sciencedirect.com/science/article/pii/S2352340917300112#!):
```
@article{zerrouki2017tashkeela,
title={Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems},
author={Zerrouki, Taha and Balla, Amar},
journal={Data in brief},
volume={11},
pages={147},
year={2017},
publisher={Elsevier}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset. |
times_of_india_news_headlines | 2022-11-03T16:15:42.000Z | [
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"task_ids:fact-checking-retrieval",
"task_ids:text-simplification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<... | null | This news dataset is a persistent historical archive of noteable events in the Indian subcontinent from start-2001 to mid-2020, recorded in realtime by the journalists of India. It contains approximately 3.3 million events published by Times of India. Times Group as a news agency, reaches out a very wide audience across Asia and drawfs every other agency in the quantity of english articles published per day. Due to the heavy daily volume over multiple years, this data offers a deep insight into Indian society, its priorities, events, issues and talking points and how they have unfolded over time. It is possible to chop this dataset into a smaller piece for a more focused analysis, based on one or more facets. | @data{DVN/DPQMQH_2020,
author = {Kulkarni, Rohit},
publisher = {Harvard Dataverse},
title = {{Times of India News Headlines}},
year = {2020},
version = {V1},
doi = {10.7910/DVN/DPQMQH},
url = {https://doi.org/10.7910/DVN/DPQMQH}
} | null | 0 | 5 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
- text-retrieval
task_ids:
- document-retrieval
- fact-checking-retrieval
- text-simplification
paperswithcode_id: null
pretty_name: Times of India News Headlines
dataset_info:
features:
- name: publish_date
dtype: string
- name: headline_category
dtype: string
- name: headline_text
dtype: string
splits:
- name: train
num_bytes: 260939306
num_examples: 3297173
download_size: 0
dataset_size: 260939306
---
# Dataset Card for Times of India News Headlines
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/J7BYRX
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This news dataset is a persistent historical archive of notable events in the Indian subcontinent from start-2001 to mid-2020, recorded in real time by the journalists of India. It contains approximately 3.3 million events published by the Times of India. As a news agency, the Times Group reaches a very wide audience across Asia and dwarfs every other agency in the quantity of English articles published per day. Due to the heavy daily volume over multiple years, this data offers a deep insight into Indian society, its priorities, events, issues and talking points and how they have unfolded over time. It is possible to cut this dataset into smaller pieces for a more focused analysis, based on one or more facets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
'publish_date': '20010530',
'headline_category': 'city.kolkata',
'headline_text': "Malda fake notes"
}
```
### Data Fields
- `publish_date`: Date of publishing in yyyyMMdd format
- `headline_category`: Category of event in ascii, dot-delimited values
- `headline_text`: Headline of the article in English
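The string-typed fields are easy to post-process: `publish_date` parses with `datetime.strptime`, and `headline_category` splits on dots. A minimal sketch using the sample instance above (`parse_row` is a hypothetical helper, not part of the dataset):

```python
from datetime import datetime

def parse_row(publish_date: str, headline_category: str):
    """Turn the raw string fields into a date and a category path."""
    date = datetime.strptime(publish_date, "%Y%m%d").date()
    categories = headline_category.split(".")
    return date, categories

date, categories = parse_row("20010530", "city.kolkata")
print(date.isoformat(), categories)  # → 2001-05-30 ['city', 'kolkata']
```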
### Data Splits
This dataset has no splits.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Rohit Kulkarni.
### Licensing Information
The data is released under the [CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/) license.
### Citation Information
```
@data{DVN/DPQMQH_2020,
author = {Kulkarni, Rohit},
publisher = {Harvard Dataverse},
title = {{Times of India News Headlines}},
year = {2020},
version = {V1},
doi = {10.7910/DVN/DPQMQH},
url = {https://doi.org/10.7910/DVN/DPQMQH}
}
```
### Contributions
Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset. |
twi_wordsim353 | 2022-11-03T16:07:57.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:tw",
"li... | null | A translation of the word pair similarity dataset wordsim-353 to Twi.
The dataset was presented in the paper
Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced
Languages: the Case of Yorùbá and Twi (LREC 2020). | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\\`u}b{\\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
language = "English",
ISBN = "979-10-95546-34-4",
} | null | 1 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
- tw
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: Twi Wordsim-353
dataset_info:
features:
- name: twi1
dtype: string
- name: twi2
dtype: string
- name: similarity
dtype: float32
splits:
- name: test
num_bytes: 7285
num_examples: 274
download_size: 6141
dataset_size: 7285
---
# Dataset Card for Twi Wordsim-353
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding
- **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Leaderboard:** -
- **Point of Contact:** [Kwabena Amponsah-Kaakyire](mailto:s8kwampo@stud.uni-saarland.de)
### Dataset Summary
A translation of the word pair similarity dataset wordsim-353 to Twi. Only 274 of the 353 word pairs were translated.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Twi (ISO 639-1: tw)
## Dataset Structure
### Data Instances
An instance consists of a pair of words as well as their similarity. The dataset contains both the original English words (from wordsim-353) as well as their translation to Twi.
### Data Fields
- `twi1`: the first word of the pair, translated into Twi
- `twi2`: the second word of the pair, translated into Twi
- `similarity`: the similarity rating from the original English dataset
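Word-similarity datasets in the wordsim-353 family are conventionally evaluated with Spearman rank correlation between embedding cosine similarities and the human `similarity` ratings. A dependency-free sketch (Spearman's rho is just the Pearson correlation of the rank vectors):

```python
def ranks(xs):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(xs, ys):
    """Spearman rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Any monotonic relationship gives rho = 1, regardless of linearity.
print(round(spearman([1.0, 2.0, 3.0], [1.0, 4.0, 9.0]), 6))  # → 1.0
```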
### Data Splits
Only a test split is available.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
abstract = "The success of several architectures to learn semantic representations from unannotated text and the availability of these kind of texts in online multilingual resources such as Wikipedia has facilitated the massive and automatic creation of resources for multiple languages. The evaluation of such resources is usually done for the high-resourced languages, where one has a smorgasbord of tasks and test sets to evaluate on. For low-resourced languages, the evaluation is more difficult and normally ignored, with the hope that the impressive capability of deep learning architectures to learn (multilingual) representations in the high-resourced setting holds in the low-resourced setting too. In this paper we focus on two African languages, Yor{\`u}b{\'a} and Twi, and compare the word embeddings obtained in this way, with word embeddings obtained from curated corpora and a language-dependent processing. We analyse the noise in the publicly available corpora, collect high quality and noisy data for the two languages and quantify the improvements that depend not only on the amount of data but on the quality too. We also use different architectures that learn word representations both from surface forms and characters to further exploit all the available information which showed to be important for these languages. For the evaluation, we manually translate the wordsim-353 word pairs dataset from English into Yor{\`u}b{\'a} and Twi. We extend the analysis to contextual word embeddings and evaluate multilingual BERT on a named entity recognition task. For this, we annotate with named entities the Global Voices corpus for Yor{\`u}b{\'a}. As output of the work, we provide corpora, embeddings and the test suits for both languages.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. |
wrbsc | 2023-01-25T15:02:59.000Z | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:cc-by-sa-3.0",
"region:us"
] | null | WUT Relations Between Sentences Corpus contains 2827 pairs of related sentences.
Relationships are derived from Cross-document Structure Theory (CST), which enables multi-document summarization through identification of cross-document rhetorical relationships within a cluster of related documents.
Every relation was marked by at least 3 annotators. | @misc{11321/305,
title = {{WUT} Relations Between Sentences Corpus},
author = {Oleksy, Marcin and Fikus, Dominika and Wolski, Michal and Podbielska, Malgorzata and Turek, Agnieszka and Kędzia, Pawel},
url = {http://hdl.handle.net/11321/305},
note = {{CLARIN}-{PL} digital repository},
copyright = {Attribution-{ShareAlike} 3.0 Unported ({CC} {BY}-{SA} 3.0)},
year = {2016}
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pl
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: wrbsc
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: relationship
dtype:
class_label:
names:
'0': Krzyżowanie_się
'1': Tło_historyczne
'2': Źródło
'3': Dalsze_informacje
'4': Zawieranie
'5': Opis
'6': Uszczegółowienie
'7': Parafraza
'8': Spełnienie
'9': Mowa_zależna
'10': Zmiana_poglądu
'11': Streszczenie
'12': Tożsamość
'13': Sprzeczność
'14': Modalność
'15': Cytowanie
splits:
- name: train
num_bytes: 779881
num_examples: 2827
download_size: 1273815
dataset_size: 779881
---
# Dataset Card for wrbsc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://clarin-pl.eu/dspace/handle/11321/305
- **Repository:** https://clarin-pl.eu/dspace/handle/11321/305
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
WUT Relations Between Sentences Corpus contains 2827 pairs of related sentences. Relationships are derived from Cross-document Structure Theory (CST), which enables multi-document summarization through identification of cross-document rhetorical relationships within a cluster of related documents. Every relation was marked by at least 3 annotators.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Polish
## Dataset Structure
### Data Instances
An example contains two related sentences and a class representing the type of relationship between those sentences.
```
{'relationship': 0,
'sentence1': 'Znajdujące się w Biurze Bezpieczeństwa Narodowego akta Komisji Weryfikacyjnej WSI zostały przewiezione do siedziby Służby Kontrwywiadu Wojskowego.',
'sentence2': '2008-07-03: Wywiezienie akt dotyczących WSI – sprawa dla prokuratury?'}
```
### Data Fields
- `sentence1`: the first sentence being compared (`string`)
- `sentence2`: the second sentence being compared (`string`)
- `relationship`: the type of relationship between those sentences. Can be one of 16 classes listed below:
- `Krzyżowanie_się`: crossing
- `Tło_historyczne`: historical background
- `Źródło`: source
- `Dalsze_informacje`: additional information
- `Zawieranie`: inclusion
- `Opis`: description
- `Uszczegółowienie`: further detail
- `Parafraza`: paraphrase
- `Spełnienie`: fulfillment
  - `Mowa_zależna`: indirect (reported) speech
- `Zmiana_poglądu`: change of opinion
- `Streszczenie`: summarization
- `Tożsamość`: identity
- `Sprzeczność`: conflict
- `Modalność`: modality
- `Cytowanie`: quotation
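For convenience, the integer `relationship` label can be mapped back to a class name. A minimal Python sketch — note that the index order below is an assumption taken from the listing above and may differ from the actual `ClassLabel` order shipped with the dataset:

```python
# Class names in the order they are listed in this card (assumed label order).
RELATIONSHIP_NAMES = [
    "Krzyżowanie_się", "Tło_historyczne", "Źródło", "Dalsze_informacje",
    "Zawieranie", "Opis", "Uszczegółowienie", "Parafraza",
    "Spełnienie", "Mowa_zależna", "Zmiana_poglądu", "Streszczenie",
    "Tożsamość", "Sprzeczność", "Modalność", "Cytowanie",
]

def relationship_name(label: int) -> str:
    """Return the class name for an integer `relationship` label."""
    return RELATIONSHIP_NAMES[label]

print(relationship_name(0))
```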
### Data Splits
Single train split
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
### Citation Information
```
@misc{11321/305,
title = {{WUT} Relations Between Sentences Corpus},
author = {Oleksy, Marcin and Fikus, Dominika and Wolski, Micha{\l} and Podbielska, Ma{\l}gorzata and Turek, Agnieszka and Kędzia, Pawe{\l}},
url = {http://hdl.handle.net/11321/305},
note = {{CLARIN}-{PL} digital repository},
copyright = {Attribution-{ShareAlike} 3.0 Unported ({CC} {BY}-{SA} 3.0)},
year = {2016}
}
```
### Contributions
Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset. |
yoruba_wordsim353 | 2022-11-03T16:07:49.000Z | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:yo",
"li... | null | A translation of the word pair similarity dataset wordsim-353 to Yorùbá.
The dataset was presented in the paper
Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced
Languages: the Case of Yorùbá and Twi (LREC 2020). | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\`u}b{\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.335",
pages = "2754--2762",
language = "English",
ISBN = "979-10-95546-34-4",
} | null | 0 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
- yo
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
pretty_name: Wordsim-353 In Yorùbá (YorubaWordsim353)
dataset_info:
features:
- name: english1
dtype: string
- name: english2
dtype: string
- name: yoruba1
dtype: string
- name: yoruba2
dtype: string
- name: similarity
dtype: float32
splits:
- name: test
num_bytes: 19299
num_examples: 353
download_size: 17039
dataset_size: 19299
---
# Dataset Card for wordsim-353 in Yorùbá (yoruba_wordsim353)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** -
- **Repository:** https://github.com/ajesujoba/YorubaTwi-Embedding
- **Paper:** https://www.aclweb.org/anthology/2020.lrec-1.335/
- **Leaderboard:** -
- **Point of Contact:** Jesujoba Alabi ( jesujobaoluwadara.alabi (at) dfki.de ) and David Adelani ( didelani (at) lsv.uni-saarland.de )
### Dataset Summary
A translation of the word pair similarity dataset wordsim-353 to Yorùbá.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Yorùbá (ISO 639-1: yo)
## Dataset Structure
### Data Instances
An instance consists of a pair of words as well as their similarity. The dataset contains both the original English words (from wordsim-353) as well as their translation to Yorùbá.
### Data Fields
- `english1`: the first word of the pair; the original English word
- `english2`: the second word of the pair; the original English word
- `yoruba1`: the first word of the pair; translation to Yorùbá
- `yoruba2`: the second word of the pair; translation to Yorùbá
- `similarity`: similarity rating according to the English dataset
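Word-pair similarity datasets like this one are typically used to evaluate word embeddings by correlating the model's cosine similarities with the human `similarity` ratings, usually via Spearman's rank correlation. A self-contained sketch of that metric in pure Python — the scores below are illustrative toy numbers, not real Yorùbá data:

```python
def _ranks(values):
    """Ranks (1-based, ties averaged) of a list of scores."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy example: human ratings vs. model cosine similarities (illustrative only).
human = [9.0, 7.5, 3.2, 1.1]
model = [0.8, 0.7, 0.3, 0.1]
print(spearman(human, model))  # perfectly rank-correlated → 1.0
```

In practice `scipy.stats.spearmanr` does the same job; the pure-Python version above just keeps the sketch dependency-free.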
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset. |
youtube_caption_corrections | 2023-01-25T15:03:42.000Z | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:slot-filling",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"... | null | Dataset built from pairs of YouTube captions where both 'auto-generated' and
'manually-corrected' captions are available for a single specified language.
This dataset labels two-way (e.g. ignoring single-sided insertions) same-length
token differences in the `diff_type` column. The `default_seq` is composed of
tokens from the 'auto-generated' captions. When a difference occurs between
the 'auto-generated' vs 'manually-corrected' captions types, the `correction_seq`
contains tokens from the 'manually-corrected' captions. | null | null | 4 | 5 | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- slot-filling
pretty_name: YouTube Caption Corrections
tags:
- token-classification-of-text-errors
dataset_info:
features:
- name: video_ids
dtype: string
- name: default_seq
sequence: string
- name: correction_seq
sequence: string
- name: diff_type
sequence:
class_label:
names:
'0': NO_DIFF
'1': CASE_DIFF
'2': PUNCUATION_DIFF
'3': CASE_AND_PUNCUATION_DIFF
'4': STEM_BASED_DIFF
'5': DIGIT_DIFF
'6': INTRAWORD_PUNC_DIFF
'7': UNKNOWN_TYPE_DIFF
'8': RESERVED_DIFF
splits:
- name: train
num_bytes: 355978939
num_examples: 10769
download_size: 222479455
dataset_size: 355978939
---
# Dataset Card for YouTube Caption Corrections
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/2dot71mily/youtube_captions_corrections
- **Repository:** https://github.com/2dot71mily/youtube_captions_corrections
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** Emily McMilin
### Dataset Summary
This dataset is built from pairs of YouTube captions where both an auto-generated and a manually-corrected caption are available for a single specified language. It is currently English-only, but the scripts in the repo support other languages. The motivation for creating it came from viewing errors in auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors.
The dataset in the repo at https://github.com/2dot71mily/youtube_captions_corrections records, in a non-destructive manner, all the differences between an auto-generated and a manually-corrected caption for thousands of videos. The dataset here focuses on the subset of those differences which are mutual and of the same token length, which means it excludes insertion and deletion differences between the two captions. Therefore, the dataset here remains a non-destructive representation of the original auto-generated captions, but excludes some of the differences that are found in the manually-corrected captions.
### Supported Tasks and Leaderboards
- `token-classification`: The tokens in `default_seq` are from the auto-generated YouTube captions. If `diff_type` is labeled greater than `0` at a given index, then the token at the same index in `default_seq` was found to differ from the token in the manually-corrected YouTube caption, and we therefore assume it is an error. A model can be trained to learn when there are errors in the auto-generated captions.
- `slot-filling`: The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions in the locations where there was found to be a difference to the token in the auto-generated YouTube captions. These 'incorrect' tokens in the `default_seq` can be masked in the locations where `diff_type` is labeled greater than `0`, so that a model can be trained to hopefully find a better word to fill in, rather than the 'incorrect' one.
End to end, such models could first identify errors in YouTube and other auto-generated captions that lack manual corrections, and then replace them with suitable alternatives.
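The slot-filling setup described above can be sketched as follows — tokens flagged as errors (`diff_type > 0`) are masked in the auto-generated sequence so a model can propose replacements. `[MASK]` here is an illustrative placeholder; the actual mask token depends on the tokenizer you use:

```python
def mask_errors(default_seq, diff_type, mask_token="[MASK]"):
    """Replace every token flagged as an error with a mask token."""
    return [mask_token if d > 0 else tok
            for tok, d in zip(default_seq, diff_type)]

tokens = ["you", "see", "it's", "a", "laughter"]
diffs = [0, 2, 0, 0, 7]
print(mask_errors(tokens, diffs))
# ['you', '[MASK]', "it's", 'a', '[MASK]']
```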
### Languages
English
## Dataset Structure
### Data Instances
If `diff_type` is labeled greater than `0` at a given index, then the token at the same index in `default_seq` was found to differ from the token in the manually-corrected YouTube caption. The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions at those locations.
`diff_type` labels for tokens are as follows:
- 0: No difference
- 1: Case difference, e.g. `hello` vs `Hello`
- 2: Punctuation difference, e.g. `hello` vs `hello,`
- 3: Case and punctuation difference, e.g. `hello` vs `Hello,`
- 4: Word difference with same stem, e.g. `thank` vs `thanked`
- 5: Digit difference, e.g. `2` vs `two`
- 6: Intra-word punctuation difference, e.g. `autogenerated` vs `auto-generated`
- 7: Unknown type of difference, e.g. `laughter` vs `draft`
- 8: Reserved for unspecified difference
{
 'video_ids': '_QUEXsHfsA0',
 'default_seq': ['you', 'see', "it's", 'a', 'laughter', 'but', 'by', 'the', 'time', 'you', 'see', 'this', 'it', "won't", 'be', 'so', 'we', 'have', 'a', 'big'],
 'correction_seq': ['', 'see,', '', '', 'draft,', '', '', '', '', '', 'read', 'this,', '', '', 'be.', 'So', '', '', '', ''],
 'diff_type': [0, 2, 0, 0, 7, 0, 0, 0, 0, 0, 7, 2, 0, 0, 2, 1, 0, 0, 0, 0]
}
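Because `correction_seq` is only populated where a difference was found, the manually-corrected caption can be reconstructed by taking the correction token wherever `diff_type` is non-zero and the auto-generated token otherwise. A minimal sketch using a slice of the instance above:

```python
def apply_corrections(default_seq, correction_seq, diff_type):
    """Merge the sparse corrections back into the auto-generated sequence."""
    return [c if d > 0 else t
            for t, c, d in zip(default_seq, correction_seq, diff_type)]

default_seq = ["you", "see", "it's", "a", "laughter", "but"]
correction_seq = ["", "see,", "", "", "draft,", ""]
diff_type = [0, 2, 0, 0, 7, 0]
print(" ".join(apply_corrections(default_seq, correction_seq, diff_type)))
# you see, it's a draft, but
```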
### Data Fields
- 'video_ids': Unique ID used by YouTube for each video. Paste into `https://www.youtube.com/watch?v={video_ids}` to see the video
- 'default_seq': Tokenized auto-generated YouTube captions for the video
- 'correction_seq': Tokenized manually-corrected YouTube captions only at those locations, where there is a difference between the auto-generated and manually-corrected captions
- 'diff_type': A value greater than `0` at every token where there is a difference between the auto-generated and manually-corrected captions
### Data Splits
No data splits
## Dataset Creation
### Curation Rationale
It was created after viewing errors in auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors.
### Source Data
#### Initial Data Collection and Normalization
All captions are requested via `googleapiclient` and `youtube_transcript_api` at the `channel_id` and language granularity, using scripts written at https://github.com/2dot71mily/youtube_captions_corrections.
The captions are tokenized on spaces and the manually-corrected sequence has here been reduced to only include differences between it and the auto-generated sequence.
#### Who are the source language producers?
Auto-generated scripts are from YouTube and the manually-corrected scripts are from creators, and any support they may have (e.g. community or software support)
### Annotations
#### Annotation process
Scripts at repo, https://github.com/2dot71mily/youtube_captions_corrections take a diff of the two captions and use this to create annotations.
#### Who are the annotators?
YouTube creators, and any support they may have (e.g. community or software support)
### Personal and Sensitive Information
All content publicly available on YouTube
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Emily McMilin
### Licensing Information
MIT License
### Citation Information
https://github.com/2dot71mily/youtube_captions_corrections
### Contributions
Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset. |
Aisha/BAAD6 | 2022-10-22T05:30:28.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:found",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unkno... | Aisha | null | null | null | 0 | 5 | ---
annotations_creators:
- found
- crowdsourced
- expert-generated
language_creators:
- found
- crowdsourced
language:
- bn
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'BAAD6: Bangla Authorship Attribution Dataset (6 Authors)'
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
## Description
**BAAD6** is an **Authorship Attribution dataset for Bengali Literature**. It was collected and analyzed by Hemayet et al [[1]](https://ieeexplore.ieee.org/document/8631977). The data was obtained from different online posts and blogs. This dataset is balanced among the 6 Authors with 350 sample texts per author. This is a relatively small dataset but is noisy given the sources it was collected from and its cleaning procedure. Nonetheless, it may help evaluate authorship attribution systems as it resembles texts often available on the Internet. Details about the dataset are given in the table below.
| Author | Samples | Word count | Unique words |
| ------ | ------- | ---------- | ------------ |
| fe | 350 | 357k | 53k |
| ij | 350 | 391k | 72k |
| mk | 350 | 377k | 47k |
| rn | 350 | 231k | 50k |
| hm | 350 | 555k | 72k |
| rg | 350 | 391k | 58k |
| **Total** | 2,100 | 2,304,338 | 230,075 |
| **Average** | 350 | 384,056.33 | 59,006.67 |
## Citation
If you use this dataset, please cite the paper [A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature](https://ieeexplore.ieee.org/document/8631977).
```
@INPROCEEDINGS{BAAD6Dataset,
author={Ahmed Chowdhury, Hemayet and Haque Imon, Md. Azizul and Islam, Md. Saiful},
booktitle={2018 21st International Conference of Computer and Information Technology (ICCIT)},
title={A Comparative Analysis of Word Embedding Representations in Authorship Attribution of Bengali Literature},
year={2018},
volume={},
number={},
pages={1-6},
doi={10.1109/ICCITECHN.2018.8631977}
}
```
This dataset is also available in Mendeley: [BAAD6 dataset](https://data.mendeley.com/datasets/w9wkd7g43f/5). Always make sure to use the latest version of the dataset. Cite the dataset directly by:
```
@misc{BAAD6Dataset,
author = {Ahmed Chowdhury, Hemayet and Haque Imon, Md. Azizul and Khatun, Aisha and Islam, Md. Saiful},
title = {BAAD6: Bangla Authorship Attribution Dataset},
year={2018},
doi = {10.17632/w9wkd7g43f.5},
howpublished= {\url{https://data.mendeley.com/datasets/w9wkd7g43f/5}}
}
``` |
AlekseyKorshuk/horror-scripts | 2022-02-10T18:26:41.000Z | [
"region:us"
] | AlekseyKorshuk | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | null | 1 | 5 | Entry not found |
DDSC/europarl | 2022-07-01T15:42:03.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:da",
"license:cc-by-4.0",
"region:us"
] | DDSC | null | null | null | 2 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: EuroParl
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for EuroParl
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Direct Download**: http://danlp-downloads.alexandra.dk/datasets/europarl.sentiment2.zip
### Dataset Summary
This dataset consists of Danish data from the European Parliament that has been annotated for sentiment analysis by the [Alexandra Institute](https://github.com/alexandrainst) - all credits go to them.
### Supported Tasks and Leaderboards
This dataset is suitable for sentiment analysis.
### Languages
This dataset is in Danish.
## Dataset Structure
### Data Instances
Every entry in the dataset has a document and an associated label.
### Data Fields
An entry in the dataset consists of the following fields:
- `text` (`str`): The text content.
- `label` (`str`): The label of the `text`. Can be "positiv", "neutral" or "negativ" for positive, neutral and negative sentiment, respectively.
### Data Splits
A `train` and `test` split is available, with the test split being 30% of the dataset, randomly sampled in a stratified fashion. There are 669 documents in the training split and 288 in the test split.
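A stratified 70/30 split like the one described can be sketched in pure Python: shuffle within each label group, then take 30% of each group for the test set. This is an illustrative reimplementation on toy data, not the exact procedure used by the curators:

```python
import random
from collections import defaultdict

def stratified_split(examples, test_frac=0.3, seed=4242):
    """Split dicts with a 'label' key so each label keeps ~test_frac in test."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    train, test = [], []
    for group in by_label.values():
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

# Toy corpus: 10 documents per sentiment label.
data = [{"text": f"doc {i}", "label": lab}
        for lab in ("positiv", "neutral", "negativ") for i in range(10)]
train, test = stratified_split(data)
print(len(train), len(test))  # 21 9
```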
## Additional Information
### Dataset Curators
The collection and annotation of the dataset is solely due to the [Alexandra Institute](https://github.com/alexandrainst).
### Licensing Information
The dataset is released under the CC BY 4.0 license.
### Citation Information
```
@misc{europarl,
title={EuroParl},
author={Alexandra Institute},
year={2020},
note={\url{https://danlp-alexandra.readthedocs.io/en/latest/docs/datasets.html#europarl-sentiment2}}
}
```
### Contributions
Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub. |
Nexdata/accented_mandarin | 2023-08-31T03:09:30.000Z | [
"region:us"
] | Nexdata | null | null | null | 3 | 5 | ---
---
# Dataset Card for accented_mandarin
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nexdata.ai/?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains 2,000 hours of Mandarin Chinese speech data, collected from local speakers in 26 provinces such as Henan, Shanxi, Sichuan, Hunan and Fujian. The content covers a generic category, human-machine interaction, smart home command and control, in-car commands, numbers, etc. The format is 16 kHz, 16-bit, uncompressed WAV, mono channel. The sentence accuracy is over 97%.
For more details, please refer to the link: https://nexdata.ai/speechRecognition?source=Huggingface
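A quick way to check that downloaded audio matches the stated format (16 kHz, 16-bit, mono WAV) is to inspect the header with Python's built-in `wave` module. The sketch below builds a tiny in-memory file purely for demonstration; in practice you would open the dataset's own files:

```python
import io
import wave

def check_format(wav_bytes, rate=16000, width=2, channels=1):
    """Return True if the WAV header matches 16 kHz / 16-bit / mono."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        return (w.getframerate() == rate
                and w.getsampwidth() == width
                and w.getnchannels() == channels)

# Build a tiny silent clip just to demonstrate the check.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 160)  # 10 ms of silence
print(check_format(buf.getvalue()))  # True
```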
### Supported Tasks and Leaderboards
automatic-speech-recognition, audio-speaker-identification: The dataset can be used to train models for Automatic Speech Recognition (ASR) and speaker identification.
### Languages
Accented Mandarin
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commerical License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions
|
Fraser/dream-coder | 2022-04-25T10:49:02.000Z | [
"language:en",
"license:mit",
"program-synthesis",
"region:us"
] | Fraser | null | null | null | 2 | 5 | ---
language:
- en
thumbnail: "https://huggingface.co/datasets/Fraser/dream-coder/resolve/main/img.png"
tags:
- program-synthesis
license: "mit"
datasets:
- program-synthesis
---
# Program Synthesis Data
Generated program synthesis datasets used to train [dreamcoder](https://github.com/ellisk42/ec).
It currently supports only text and list data.

|
GroNLP/ik-nlp-22_transqe | 2022-10-21T08:06:50.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|esnli",
"language:en... | GroNLP | The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to
include human-annotated natural language explanations of the entailment
relations. This version includes an automatic translation to Dutch and two quality estimation annotations
for each translated field. | @incollection{NIPS2018_8163,
title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
booktitle = {Advances in Neural Information Processing Systems 31},
editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
pages = {9539--9549},
year = {2018},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- machine-generated
language:
- en
- nl
license:
- apache-2.0
multilinguality:
- translation
size_categories:
- unknown
source_datasets:
- extended|esnli
task_categories:
- text-classification
task_ids:
- natural-language-inference
pretty_name: iknlp22-transqe
tags:
- quality-estimation
---
# Dataset Card for IK-NLP-22 Project 3: Translation Quality-driven Data Selection for Natural Language Inference
## Table of Contents
- [Dataset Card for IK-NLP-22 Project 3: Translation Quality-driven Data Selection for Natural Language Inference](#dataset-card-for-ik-nlp-22-project-3-translation-quality-driven-data-selection-for-natural-language-inference)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Data Example](#data-example)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Source:** [Github](https://github.com/OanaMariaCamburu/e-SNLI)
- **Point of Contact:** [Gabriele Sarti](mailto:ik-nlp-course@rug.nl)
### Dataset Summary
This dataset contains the full [e-SNLI](https://huggingface.co/datasets/esnli) dataset, automatically translated to Dutch using the [Helsinki-NLP/opus-mt-en-nl](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl) neural machine translation model. The translation of each field has been annotated with two quality estimation scores, produced by referenceless versions of the [COMET](https://github.com/Unbabel/COMET/) metric by Unbabel.
The intended usage of this corpus is restricted to the scope of final project for the 2022 edition of the Natural Language Processing course at the Information Science Master's Degree (IK) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) and [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti), with the assistance of [Anjali Nair](https://nl.linkedin.com/in/anjalinair012).
*The e-SNLI corpus was made freely available by the authors on Github. The present dataset was created for educational purposes and is based on the original e-SNLI dataset by Camburu et al. All rights to the present contents are attributed to the original authors.*
### Languages
The language data of this corpus is in English (BCP-47 `en`) and Dutch (BCP-47 `nl`).
## Dataset Structure
### Data Instances
The dataset contains a single configuration by default, named `plain_text`, with the three original splits `train`, `validation` and `test`. Every split contains the following fields:
| **Field** | **Description** |
|------------|-----------------------------|
|`premise_en`| The original English premise.|
|`premise_nl`| The premise automatically translated to Dutch.|
|`hypothesis_en`| The original English hypothesis.|
|`hypothesis_nl`| The hypothesis automatically translated to Dutch.|
|`label`| The label of the data instance (0 for entailment, 1 for neutral, 2 for contradiction).|
|`explanation_1_en`| The first explanation for the assigned label in English.|
|`explanation_1_nl`| The first explanation automatically translated to Dutch.|
|`explanation_2_en`| The second explanation for the assigned label in English.|
|`explanation_2_nl`| The second explanation automatically translated to Dutch.|
|`explanation_3_en`| The third explanation for the assigned label in English.|
|`explanation_3_nl`| The third explanation automatically translated to Dutch.|
|`da_premise`| The quality estimation produced by the `wmt20-comet-qe-da` model for the premise translation.|
|`da_hypothesis`| The quality estimation produced by the `wmt20-comet-qe-da` model for the hypothesis translation.|
|`da_explanation_1`| The quality estimation produced by the `wmt20-comet-qe-da` model for the first explanation translation.|
|`da_explanation_2`| The quality estimation produced by the `wmt20-comet-qe-da` model for the second explanation translation.|
|`da_explanation_3`| The quality estimation produced by the `wmt20-comet-qe-da` model for the third explanation translation.|
|`mqm_premise`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the premise translation.|
|`mqm_hypothesis`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the hypothesis translation.|
|`mqm_explanation_1`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the first explanation translation.|
|`mqm_explanation_2`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the second explanation translation.|
|`mqm_explanation_3`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the third explanation translation.|
Explanations 2 and 3, and their related quality estimation scores, are only present in the `validation` and `test` splits.
### Data Splits
| config| train | validation | test |
|------------:|---------|------------|------|
|`plain_text` | 549,367 | 9,842 | 9,824 |
For your analyses, use as much of the data as is reasonable for your computational setup; the more, the better.
### Data Example
The following is an example of entry 2000 taken from the `test` split:
```json
{
"premise_en": "A young woman wearing a yellow sweater and black pants is ice skating outdoors.",
"premise_nl": "Een jonge vrouw met een gele trui en zwarte broek schaatst buiten.",
"hypothesis_en": "a woman is practicing for the olympics",
"hypothesis_nl": "een vrouw oefent voor de Olympische Spelen",
"label": 1,
"explanation_1_en": "You can not infer it's for the Olympics.",
"explanation_1_nl": "Het is niet voor de Olympische Spelen.",
"explanation_2_en": "Just because a girl is skating outdoors does not mean she is practicing for the Olympics.",
"explanation_2_nl": "Alleen omdat een meisje buiten schaatst betekent niet dat ze oefent voor de Olympische Spelen.",
"explanation_3_en": "Ice skating doesn't imply practicing for the olympics.",
"explanation_3_nl": "Schaatsen betekent niet oefenen voor de Olympische Spelen.",
"da_premise": "0.6099",
"mqm_premise": "0.1298",
"da_hypothesis": "0.8504",
"mqm_hypothesis": "0.1521",
"da_explanation_1": "0.0001",
"mqm_explanation_1": "0.1237",
"da_explanation_2": "0.4017",
"mqm_explanation_2": "0.1467",
"da_explanation_3": "0.6069",
"mqm_explanation_3": "0.1389"
}
```
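The DA and MQM scores in the example above are stored as strings, so they need to be cast to `float` before use. Below is a minimal, dependency-free sketch of filtering examples by translation quality; the field names follow the table above, but the `0.5` threshold is an arbitrary choice for illustration, not a recommendation from the dataset authors:

```python
def passes_quality(example, fields=("premise", "hypothesis"), min_da=0.5):
    """Keep an example only if every listed field's DA quality
    estimate meets the threshold; scores are stored as strings,
    so they are cast to float first."""
    return all(float(example[f"da_{f}"]) >= min_da for f in fields)

sample = {
    "da_premise": "0.6099",
    "da_hypothesis": "0.8504",
    "da_explanation_1": "0.0001",
}

# Both the premise and hypothesis translations clear the threshold...
print(passes_quality(sample))                             # True
# ...but the first explanation's translation does not.
print(passes_quality(sample, fields=("explanation_1",)))  # False
```

With 🤗 Datasets, the same predicate can be passed directly to `dataset.filter(passes_quality)`.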
### Dataset Creation
The dataset was created through the following steps:
- Translating every field of the original e-SNLI corpus to Dutch using the [Helsinki-NLP/opus-mt-en-nl](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl) neural machine translation model.
- Annotating the quality estimation of the translations with two referenceless versions of the [COMET](https://github.com/Unbabel/COMET/) metric by Unbabel.
## Additional Information
### Dataset Curators
For problems with this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl).
### Licensing Information
The dataset is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@incollection{NIPS2018_8163,
title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
booktitle = {Advances in Neural Information Processing Systems 31},
editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
pages = {9539--9549},
year = {2018},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
}
``` |
SocialGrep/one-million-reddit-confessions | 2022-07-01T18:48:52.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | null | null | null | 1 | 5 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for one-million-reddit-confessions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionconfessions)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=onemillionconfessions)
### Dataset Summary
This corpus contains a million posts from the following subreddits:
- /r/trueoffmychest
- /r/confession
- /r/confessions
- /r/offmychest
Posts are annotated with their score.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a Reddit post.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': the domain of the data point's link.
- 'url': the destination of the data point's link, if any.
- 'selftext': the self-text of the data point, if any.
- 'title': the title of the post data point.
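As a quick illustration of working with the fields above, the sketch below keeps only posts with a meaningful score and a non-empty self-text. The sample records (ids, scores, texts) are hypothetical, and the `100` score cutoff is an arbitrary choice:

```python
posts = [
    {"type": "post", "id": "abc123", "score": 1450,
     "selftext": "I need to get this off my chest...",
     "subreddit.name": "confession"},
    {"type": "post", "id": "def456", "score": 3,
     "selftext": "", "subreddit.name": "offmychest"},
]

def is_high_signal(post, min_score=100):
    """Keep posts with a meaningful score and a non-empty self-text."""
    return post["score"] >= min_score and bool(post["selftext"])

kept = [p for p in posts if is_high_signal(p)]
print([p["id"] for p in kept])  # ['abc123']
```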
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
SocialGrep/ten-million-reddit-answers | 2022-07-01T17:38:25.000Z | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | SocialGrep | A spiritual successor to our One Million Questions, this NLP dataset contains an outstanding ten million of /r/AskReddit answers, going back from the end of November of 2020. | null | null | 6 | 5 | ---
annotations_creators:
- lexyr
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
paperswithcode_id: null
---
# Dataset Card for ten-million-reddit-answers
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers)
### Dataset Summary
This corpus contains ten million question-answer pairs, labeled with score and pre-packaged with results of a basic sentiment predictor.
The data was procured from /r/AskReddit using [SocialGrep](https://socialgrep.com/?utm_source=huggingface&utm_medium=link&utm_campaign=tenmillionanswers).
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
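Since posts and comments live in separate files, pairing an answer with its question requires recovering the parent post id. The field list above does not include a dedicated parent-id column, but the post id can be read off a comment's `permalink`, assuming the usual Reddit permalink shape `/r/<sub>/comments/<post_id>/<slug>/[<comment_id>/]` (the sample permalink below is hypothetical):

```python
def post_id_from_permalink(permalink):
    """Extract the base-36 post id from a Reddit permalink of the
    (assumed) form /r/<sub>/comments/<post_id>/<slug>/[<comment_id>/]."""
    parts = permalink.strip("/").split("/")
    return parts[parts.index("comments") + 1]

comment = {"type": "comment",
           "permalink": "/r/AskReddit/comments/k5qz1a/whats_a_fact/gegp2xs/"}
print(post_id_from_permalink(comment["permalink"]))  # k5qz1a
```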
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
CC-BY v4.0
### Contributions
[Needs More Information] |
Tahsin-Mayeesha/Bengali-SQuAD | 2022-10-25T09:06:50.000Z | [
"task_categories:question-answering",
"multilinguality:monolingual",
"language:bn",
"region:us"
] | Tahsin-Mayeesha | null | null | null | 0 | 5 | ---
language:
- bn
multilinguality:
- monolingual
task_categories:
- question-answering
---
# Overview
This dataset contains the data for the paper [Deep learning based question answering system in Bengali](https://www.tandfonline.com/doi/full/10.1080/24751839.2020.1833136). It is a translation of the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset into the Bengali language. Preprocessing details can be found in the paper. |
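SQuAD 2.0-format files nest question-answer pairs under `data → paragraphs → qas`, with an `is_impossible` flag marking unanswerable questions. A minimal sketch of walking that structure (the sample record is illustrative, not taken from the dataset):

```python
squad_like = {"data": [{"title": "Example", "paragraphs": [{
    "context": "Dhaka is the capital of Bangladesh.",
    "qas": [{"id": "q1", "question": "What is the capital of Bangladesh?",
             "is_impossible": False,
             "answers": [{"text": "Dhaka", "answer_start": 0}]}]}]}]}

def iter_qas(squad):
    """Yield (question, context, answers) triples from SQuAD-format JSON."""
    for article in squad["data"]:
        for para in article["paragraphs"]:
            for qa in para["qas"]:
                yield qa["question"], para["context"], qa["answers"]

question, context, answers = next(iter_qas(squad_like))
print(answers[0]["text"])  # Dhaka
```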
MonoHime/ru_sentiment_dataset | 2021-05-20T00:57:22.000Z | [
"language:ru",
"sentiment",
"text-classification",
"region:us"
] | MonoHime | null | null | null | 3 | 5 | ---
language:
- ru
tags:
- sentiment
- text-classification
---
# Dataset with sentiment of Russian text
Contains an aggregation of Russian texts drawn from six source datasets.
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
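As a quick illustration, the integer labels above can be decoded with a plain mapping (names exactly as listed above):

```python
# Label ids as documented above.
ID2LABEL = {0: "NEUTRAL", 1: "POSITIVE", 2: "NEGATIVE"}

def decode_label(label_id):
    """Map an integer sentiment label to its name."""
    return ID2LABEL[label_id]

print(decode_label(1))  # POSITIVE
```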
## Datasets
**[Sentiment Analysis in Russian](https://www.kaggle.com/c/sentiment-analysis-in-russian/data)**
> Sentiment labels (positive, negative or neutral) for Russian-language news, from a Kaggle competition.
**[Russian Language Toxic Comments](https://www.kaggle.com/blackmoon/russian-language-toxic-comments/)**
> Small dataset with labeled comments from 2ch.hk and pikabu.ru.
**[Dataset of car reviews for machine learning (sentiment analysis)](https://github.com/oldaandozerskaya/auto_reviews)**
> Glazkova A. The evaluation of the proximity of text categories for solving electronic documents classification tasks //VESTNIK TOMSKOGO GOSUDARSTVENNOGO UNIVERSITETA-UPRAVLENIE VYCHISLITELNAJA TEHNIKA I INFORMATIKA-TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE. – 2015. – Т. 31. – №. 2. – С. 18-25.
**[Sentiment datasets by Blinov](https://github.com/natasha/corus/issues/14)**
> The datasets contain reviews from different domains.
**[LINIS Crowd](http://www.linis-crowd.org/)**
> The work "LINIS Crowd SENT - a sentiment dictionary and a collection of texts with sentiment annotation" was created by Sergei Koltcov, Olessia Koltsova and Svetlana Alexeeva.
**[Russian Hotel Reviews Dataset](https://drive.google.com/drive/folders/17sa3h4XHcG0MJGrbfOsbL-kDW29CuJul)**
> Hotel reviews in Russian |
YuAnthony/chid | 2022-02-23T05:19:14.000Z | [
"region:us"
] | YuAnthony | null | null | null | 1 | 5 | Entry not found |
allegro/klej-cdsc-r | 2021-11-29T19:14:36.000Z | [
"region:us"
] | allegro | null | null | null | 0 | 5 | Entry not found |
anton-l/common_language | 2022-10-21T16:20:41.000Z | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|common_voice",
"language:ar",
"language:br",
"language:ca",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:de"... | anton-l | This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database.
The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language).
The dataset has been extracted from CommonVoice to train language-id systems. | @dataset{ganesh_sinisetty_2021_5036977,
author = {Ganesh Sinisetty and
Pavlo Ruban and
Oleksandr Dymov and
Mirco Ravanelli},
title = {CommonLanguage},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5036977},
url = {https://doi.org/10.5281/zenodo.5036977}
} | null | 0 | 5 | ---
pretty_name: Common Language
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy
- ia
- id
- it
- ja
- ka
- kab
- ky
- lv
- mn
- mt
- nl
- pl
- pt
- rm
- ro
- ru
- rw
- sah
- sl
- sv
- ta
- tr
- tt
- uk
- zh
language_bcp47:
- ar
- br
- ca
- cnh
- cs
- cv
- cy
- de
- dv
- el
- en
- eo
- es
- et
- eu
- fa
- fr
- fy-NL
- ia
- id
- it
- ja
- ka
- kab
- ky
- lv
- mn
- mt
- nl
- pl
- pt
- rm-sursilv
- ro
- ru
- rw
- sah
- sl
- sv-SE
- ta
- tr
- tt
- uk
- zh-CN
- zh-HK
- zh-TW
license:
- cc-by-nc-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|common_voice
task_categories:
- speech-processing
task_ids:
- speech-classification
---
# Dataset Card for common_language
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/5036977
- **Repository:** https://github.com/speechbrain/speechbrain/tree/develop/recipes/CommonLanguage
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
This dataset is composed of speech recordings from languages that were carefully selected from the CommonVoice database. The total duration of audio recordings is 45.1 hours (i.e., 1 hour of material for each language). The dataset has been extracted from CommonVoice to train language-id systems.
### Supported Tasks and Leaderboards
The baselines for language-id are available in the SpeechBrain toolkit (see recipes/CommonLanguage):
https://github.com/speechbrain/speechbrain
### Languages
List of included language:
```
Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file, and its label `language`. Additional fields include `age`, `client_id`, `gender` and `sentence`.
```python
{
'client_id': 'itln_trn_sp_175',
'path': '/path/common_voice_kpd/Italian/train/itln_trn_sp_175/common_voice_it_18279446.wav',
'sentence': 'Con gli studenti è leggermente simile.',
'age': 'not_defined',
'gender': 'not_defined',
'language': 22
}
```
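The `language` value in the sample above is an integer `ClassLabel` index. With 🤗 Datasets you would normally decode it via `dataset.features["language"].int2str(22)`; assuming the labels follow the alphabetical list in the Languages section (which matches the sample above, where `language: 22` accompanies an Italian sentence), a dependency-free sketch looks like this:

```python
# Class names in the (assumed) alphabetical order of the Languages list above;
# the dataset's own spellings ("Ukranian") are kept as-is.
LANGUAGES = (
    "Arabic, Basque, Breton, Catalan, Chinese_China, Chinese_Hongkong, "
    "Chinese_Taiwan, Chuvash, Czech, Dhivehi, Dutch, English, Esperanto, "
    "Estonian, French, Frisian, Georgian, German, Greek, Hakha_Chin, "
    "Indonesian, Interlingua, Italian, Japanese, Kabyle, Kinyarwanda, "
    "Kyrgyz, Latvian, Maltese, Mongolian, Persian, Polish, Portuguese, "
    "Romanian, Romansh_Sursilvan, Russian, Sakha, Slovenian, Spanish, "
    "Swedish, Tamil, Tatar, Turkish, Ukranian, Welsh"
).split(", ")

# The sample data point above has language == 22:
print(LANGUAGES[22])  # Italian
```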
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`language` (`ClassLabel`): The language of the recording (see the `Languages` section above)
`sentence` (`string`): The sentence the user was prompted to speak
`age` (`string`): The age of the speaker.
`gender` (`string`): The gender of the speaker
### Data Splits
The dataset is already balanced and split into train, dev (validation) and test sets.
| Name | Train | Dev | Test |
|:---------------------------------:|:------:|:------:|:-----:|
| **# of utterances** | 177552 | 47104 | 47704 |
| **# unique speakers** | 11189 | 1297 | 1322 |
| **Total duration, hr** | 30.04 | 7.53 | 7.53 |
| **Min duration, sec** | 0.86 | 0.98 | 0.89 |
| **Mean duration, sec** | 4.87 | 4.61 | 4.55 |
| **Max duration, sec** | 21.72 | 105.67 | 29.83 |
| **Duration per language, min** | ~40 | ~10 | ~10 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
The Mongolian and Ukrainian languages are spelled as "Mangolian" and "Ukranian" in this version of the dataset.
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@dataset{ganesh_sinisetty_2021_5036977,
author = {Ganesh Sinisetty and
Pavlo Ruban and
Oleksandr Dymov and
Mirco Ravanelli},
title = {CommonLanguage},
month = jun,
year = 2021,
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5036977},
url = {https://doi.org/10.5281/zenodo.5036977}
}
```
### Contributions
Thanks to [@anton-l](https://github.com/anton-l) for adding this dataset.
|
ctu-aic/csfever | 2022-11-01T05:56:15.000Z | [
"license:cc-by-sa-3.0",
"arxiv:1803.05355",
"arxiv:2201.11115",
"region:us"
] | ctu-aic | CsFEVER is a Czech localisation of the English FEVER dataset. | @article{DBLP:journals/corr/abs-2201-11115,
author = {Jan Drchal and
Herbert Ullrich and
Martin R{\'{y}}par and
Hana Vincourov{\'{a}} and
V{\'{a}}clav Moravec},
title = {CsFEVER and CTKFacts: Czech Datasets for Fact Verification},
journal = {CoRR},
volume = {abs/2201.11115},
year = {2022},
url = {https://arxiv.org/abs/2201.11115},
eprinttype = {arXiv},
eprint = {2201.11115},
timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
} | null | 1 | 5 | ---
license: cc-by-sa-3.0
---
# CsFEVER experimental Fact-Checking dataset
A Czech dataset for fact verification, localized from the data points of [FEVER](https://arxiv.org/abs/1803.05355) using the localization scheme described in the [CTKFacts: Czech Datasets for Fact Verification](https://arxiv.org/abs/2201.11115) paper, which is currently under revision for publication in the LREV journal.
The version you are looking at was reformatted into *Claim*-*Evidence* string pairs for the specific task of NLI. A more general, document-retrieval-ready interpretation of our datapoints, which can be used for training and evaluating DR models over the June 2016 Wikipedia snapshot, can be found in the [data_dr]() folder in the JSON Lines format.
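The NLI formulation pairs a claim string with an evidence string and a veracity label; the original FEVER label set is SUPPORTS / REFUTES / NOT ENOUGH INFO. A hedged sketch of bundling datapoints into that form (the field names and sample sentences are illustrative, not the dataset's actual column names):

```python
FEVER_LABELS = ("SUPPORTS", "REFUTES", "NOT ENOUGH INFO")

def to_nli_pair(claim, evidence, label_id):
    """Bundle a claim-evidence string pair with its FEVER-style label."""
    return {"claim": claim, "evidence": evidence,
            "label": FEVER_LABELS[label_id]}

pair = to_nli_pair("Praha je hlavní město České republiky.",
                   "Praha je hlavní a současně největší město Česka.", 0)
print(pair["label"])  # SUPPORTS
```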
## Data Statement
### Curation Rationale
TODO
|
ctu-aic/snli_cs | 2021-11-21T21:07:34.000Z | [
"region:us"
] | ctu-aic | TODO: Snli_cs is a Czech translation of the Stanford NLI dataset | todo | null | 0 | 5 | Entry not found |
dk-crazydiv/huggingface-modelhub | 2021-06-20T14:09:58.000Z | [
"region:us"
] | dk-crazydiv | Metadata information of all the models available on HuggingFace's modelhub | \ | null | 3 | 5 | ## Summary
Metadata for all the models uploaded to the [HuggingFace modelhub](https://huggingface.co/models).
The dataset was last updated on 15th June 2021 and contains information on 10,354 models (v1).
Only a `train` split is provided.
#### Update: v1.0.2: Added downloads_last_month and library data
The same dataset is also available on [Kaggle](https://www.kaggle.com/crazydiv/huggingface-modelhub).
## Loading data
```python
from datasets import load_dataset
modelhub_dataset = load_dataset("dk-crazydiv/huggingface-modelhub")
```
### Useful commands:
```python
modelhub_dataset["train"] # Access train subset (the only subset available)
modelhub_dataset["train"][0] # Access the dataset elements by index
modelhub_dataset["train"].features # Get the columns present in the dataset.
```
### Sample dataset:
```json
{
"downloads_last_month": 7474,
"files": [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"spiece.model",
"tf_model.h5",
"tokenizer.json",
"with-prefix-tf_model.h5"
],
"lastModified": "2021-01-13T15:08:24.000Z",
"library": "transformers",
"modelId": "albert-base-v1",
"pipeline_tag": "fill-mask",
"publishedBy": "huggingface",
"tags": [
"pytorch",
"tf",
"albert",
"masked-lm",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"fill-mask"
],
"modelCard": "Readme sample data..."
}
```
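A small sketch of querying records shaped like the sample above, e.g. listing models that carry a given tag, most downloaded first. The second record below is hypothetical, added only to make the filtering visible:

```python
models = [
    {"modelId": "albert-base-v1", "downloads_last_month": 7474,
     "tags": ["pytorch", "tf", "fill-mask", "license:apache-2.0"]},
    {"modelId": "hypothetical-ner-model", "downloads_last_month": 12,
     "tags": ["pytorch", "token-classification"]},
]

def models_with_tag(models, tag):
    """Return ids of models carrying `tag`, most downloaded first."""
    hits = [m for m in models if tag in m["tags"]]
    hits.sort(key=lambda m: m["downloads_last_month"], reverse=True)
    return [m["modelId"] for m in hits]

print(models_with_tag(models, "pytorch"))
# ['albert-base-v1', 'hypothetical-ner-model']
```

On the full dataset, the same predicate works with `modelhub_dataset["train"].filter(lambda m: "fill-mask" in m["tags"])`.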
## Bugs:
Please report any bugs/improvements to me on [twitter](https://twitter.com/kartik_godawat) |
eugenesiow/PIRM | 2022-10-21T04:01:16.000Z | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"license:cc-by-nc-sa-4.0",
"other-image-super-resolution",
"arxiv:1809.07517",
"region:us"
] | eugenesiow | The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing.
These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc.
Images vary in size, and are typically ~300K pixels in resolution.
This dataset was first used for evaluating the perceptual quality of super-resolution algorithms in The 2018 PIRM
challenge on Perceptual Super-resolution, in conjunction with ECCV 2018. | @misc{shoeiby2019pirm2018,
title={PIRM2018 Challenge on Spectral Image Super-Resolution: Dataset and Study},
author={Mehrdad Shoeiby and Antonio Robles-Kelly and Ran Wei and Radu Timofte},
year={2019},
eprint={1904.00540},
archivePrefix={arXiv},
primaryClass={cs.CV}
} | null | 0 | 5 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language: []
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: PIRM
tags:
- other-image-super-resolution
---
# Dataset Card for PIRM
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage**: https://github.com/roimehrez/PIRM2018
- **Repository**: https://huggingface.co/datasets/eugenesiow/PIRM
- **Paper**: https://arxiv.org/abs/1809.07517
- **Leaderboard**: https://github.com/eugenesiow/super-image#scale-x2
### Dataset Summary
The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing.
These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc.
Images vary in size, and are typically ~300K pixels in resolution.
This dataset was first used for evaluating the perceptual quality of super-resolution algorithms in The 2018 PIRM
challenge on Perceptual Super-resolution, in conjunction with ECCV 2018.
Install with `pip`:
```bash
pip install datasets super-image
```
Evaluate a model with the [`super-image`](https://github.com/eugenesiow/super-image) library:
```python
from datasets import load_dataset
from super_image import EdsrModel
from super_image.data import EvalDataset, EvalMetrics
dataset = load_dataset('eugenesiow/PIRM', 'bicubic_x2', split='validation')
eval_dataset = EvalDataset(dataset)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2)
EvalMetrics().evaluate(model, eval_dataset)
```
### Supported Tasks and Leaderboards
The dataset is commonly used for evaluation of the `image-super-resolution` task.
Unofficial [`super-image`](https://github.com/eugenesiow/super-image) leaderboard for:
- [Scale 2](https://github.com/eugenesiow/super-image#scale-x2)
- [Scale 3](https://github.com/eugenesiow/super-image#scale-x3)
- [Scale 4](https://github.com/eugenesiow/super-image#scale-x4)
- [Scale 8](https://github.com/eugenesiow/super-image#scale-x8)
### Languages
Not applicable.
## Dataset Structure
### Data Instances
An example of `validation` for `bicubic_x2` looks as follows.
```
{
"hr": "/.cache/huggingface/datasets/downloads/extracted/PIRM_valid_HR/1.png",
"lr": "/.cache/huggingface/datasets/downloads/extracted/PIRM_valid_LR_x2/1.png"
}
```
### Data Fields
The data fields are the same among all splits.
- `hr`: a `string` to the path of the High Resolution (HR) `.png` image.
- `lr`: a `string` to the path of the Low Resolution (LR) `.png` image.
### Data Splits
| name |validation|test|
|-------|---:|---:|
|bicubic_x2|100|100|
|bicubic_x3|100|100|
|bicubic_x4|100|100|
|unknown_x4|100|100|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
No annotations.
#### Who are the annotators?
No annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- **Original Authors**: [Blau et al. (2018)](https://arxiv.org/abs/1809.07517)
### Licensing Information
This dataset is published under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```bibtex
@misc{blau20192018,
title={The 2018 PIRM Challenge on Perceptual Image Super-resolution},
author={Yochai Blau and Roey Mechrez and Radu Timofte and Tomer Michaeli and Lihi Zelnik-Manor},
year={2019},
eprint={1809.07517},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@eugenesiow](https://github.com/eugenesiow) for adding this dataset.
|
frtna/jwt300_mt | 2021-12-08T22:29:26.000Z | [
"region:us"
] | frtna | This new dataset is designed to be used in the scope of a machine translation project. | @InProceedings{phd,
title = {JWT-300 OPUS Machine Translation Dataset},
author={hmtkvs, Inc.
},
year={2021}
} | null | 0 | 5 | Entry not found |
fuliucansheng/pascal_voc | 2022-01-31T14:54:11.000Z | [
"region:us"
] | fuliucansheng | PASCAL_VOC | PASCAL_VOC | null | 0 | 5 | Entry not found |
gigant/m-ailabs_speech_dataset_fr | 2022-10-24T17:38:45.000Z | [
"task_categories:automatic-speech-recognition",
"language:fr",
"license:cc",
"region:us"
] | gigant | \
The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.
Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and text files in a prepared format.
A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds, and their approximate total length is shown in the list (and in the respective info.txt files) below.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain – except for Ukrainian.
Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details). | \ | null | 0 | 5 | ---
language:
- fr
license: cc
size_categories:
fr:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: M-AILABS Speech Dataset (French)
---
## Dataset Description
- **Homepage:** https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/
### Dataset Summary
The M-AILABS Speech Dataset is the first large dataset that we are providing free-of-charge, freely usable as training data for speech recognition and speech synthesis.
Most of the data is based on LibriVox and Project Gutenberg. The training data consist of nearly a thousand hours of audio and text files in a prepared format.
A transcription is provided for each clip. Clips vary in length from 1 to 20 seconds, and their approximate total length is shown in the list (and in the respective info.txt files) below.
The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded by the LibriVox project and is also in the public domain – except for Ukrainian.
Ukrainian audio was kindly provided either by Nash Format or Gwara Media for machine learning purposes only (please check the data info.txt files for details).
### Languages
French
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, called `audio`, and its sentence.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- sentence: The sentence the user was prompted to speak
### Data Splits
The speech material has not been subdivided into portions; everything is in the "train" split.
The train split consists of 82825 audio clips and their corresponding sentences.
### Contributions
[@gigant](https://huggingface.co/gigant) added this dataset. |
gsarti/itacola | 2022-07-01T15:38:55.000Z | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:it",
"license:unknown",
"arxiv:2109.12053",... | gsarti | The Italian Corpus of Linguistic Acceptability includes almost 10k sentences taken from
linguistic literature with a binary annotation made by the original authors themselves.
The work is inspired by the English Corpus of Linguistic Acceptability (CoLA) by Warstadt et al.
Part of the dataset has been manually annotated to highlight 9 linguistic phenomena. | @inproceedings{trotta-etal-2021-monolingual,
author = {Trotta, Daniela and Guarasci, Raffaele and Leonardelli, Elisa and Tonelli, Sara},
title = {Monolingual and Cross-Lingual Acceptability Judgments with the Italian {CoLA} corpus},
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = {2021},
address = "Punta Cana, Dominican Republic and Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2109.12053",
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- it
license:
- unknown
multilinguality:
- monolingual
pretty_name: itacola
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
---
# Dataset Card for ItaCoLA
## Table of Contents
- [Dataset Card for ItaCoLA](#dataset-card-for-itacola)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Acceptability Classification](#acceptability-classification)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Scores Configuration](#scores-configuration)
- [Phenomena Configuration](#phenomena-configuration)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Repository:** [Github](https://github.com/dhfbk/ItaCoLA-dataset)
- **Paper:** [Arxiv](http://ceur-ws.org/Vol-2765/paper169.pdf)
- **Point of Contact:** [Daniela Trotta](dtrotta@unisa.it)
### Dataset Summary
The Italian Corpus of Linguistic Acceptability includes almost 10k sentences taken from linguistic literature with a binary annotation made by the original authors themselves. The work is inspired by the English [Corpus of Linguistic Acceptability](https://nyu-mll.github.io/CoLA/).
**Disclaimer**: *The ItaCoLA corpus is hosted on Github by the [Digital Humanities group at FBK](https://dh.fbk.eu/)*. It was introduced in the article [Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus](https://arxiv.org/abs/2109.12053) by [Daniela Trotta](https://dh.fbk.eu/author/daniela/), [Raffaele Guarasci](https://www.icar.cnr.it/persone/guarasci/), [Elisa Leonardelli](https://dh.fbk.eu/author/elisa/), [Sara Tonelli](https://dh.fbk.eu/author/sara/)
### Supported Tasks and Leaderboards
#### Acceptability Classification
The following table is taken from Table 4 of the original paper, where an LSTM and a BERT model pretrained on Italian are fine-tuned on the `train` split of the corpus and evaluated respectively on the `test` split (*In-domain*, `in`) and on the acceptability portion of the [AcCompl-it] corpus (*Out-of-domain*, `out`). Models are evaluated with accuracy (*Acc.*) and Matthews Correlation Coefficient (*MCC*) in both settings. Results are averaged over 10 runs with ±stdev. error bounds.
| | `in`, Acc.| `in`, MCC| `out`, Acc.|`out`, MCC|
|---------:|-----------:|----------:|-----------:|---------:|
|`LSTM` | 0.794 | 0.278 ± 0.029 | 0.605 | 0.147 ± 0.066 |
|`ITA-BERT`| 0.904 | 0.603 ± 0.022 | 0.683 | 0.198 ± 0.036 |
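The MCC values reported above can be computed from binary confusion-matrix counts; a minimal, self-contained sketch (the helper below is ours, not from the paper's codebase):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from confusion-matrix counts.

    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to random guessing.
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally defined as 0 when any marginal count is empty.
    return numerator / denominator if denominator else 0.0

print(mcc(50, 40, 5, 5))  # largely-correct classifier, ≈ 0.80
```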
### Languages
The language data in ItaCoLA is in Italian (BCP-47 `it`).
## Dataset Structure
### Data Instances
#### Scores Configuration
The `scores` configuration contains sentences with acceptability judgments. An example from the `train` split of the `scores` config (default) is provided below.
```json
{
"unique_id": 1,
"source": "Graffi_1994",
"acceptability": 1,
"sentence": "Quest'uomo mi ha colpito."
}
```
The text is provided as-is, without further preprocessing or tokenization.
The fields are the following:
- `unique_id`: Unique identifier for the sentence across configurations.
- `source`: Original source for the sentence.
- `acceptability`: Binary score, 1 = acceptable, 0 = not acceptable.
- `sentence`: The evaluated sentence.
#### Phenomena Configuration
The `phenomena` configuration contains a sample of sentences from `scores` that has been manually annotated to denote the presence of 9 linguistic phenomena. An example from the `train` split is provided below:
```json
{
"unique_id": 1,
"source": "Graffi_1994",
"acceptability": 1,
"sentence": "Quest'uomo mi ha colpito.",
"cleft_construction": 0,
"copular_construction": 0,
"subject_verb_agreement": 1,
"wh_islands_violations": 0,
"simple": 0,
"question": 0,
"auxiliary": 1,
"bind": 0,
"indefinite_pronouns": 0
}
```
For each one of the new fields, the value of the binary score denotes the presence (1) or the absence (0) of the respective phenomenon. Refer to the original paper for a detailed description of each phenomenon.
### Data Splits
| config| train| test|
|----------:|-----:|----:|
|`scores` | 7801 | 975 |
|`phenomena`| 2088 | - |
### Dataset Creation
Please refer to the original article [Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus](https://arxiv.org/abs/2109.12053) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
### Licensing Information
No licensing information available.
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{trotta-etal-2021-monolingual-cross,
title = "Monolingual and Cross-Lingual Acceptability Judgments with the {I}talian {C}o{LA} corpus",
author = "Trotta, Daniela and
Guarasci, Raffaele and
Leonardelli, Elisa and
Tonelli, Sara",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.250",
doi = "10.18653/v1/2021.findings-emnlp.250",
pages = "2929--2940"
}
```
|
huggingartists/logic | 2022-10-25T09:35:38.000Z | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | huggingartists | This dataset is designed to generate lyrics with HuggingArtists. | @InProceedings{huggingartists:dataset,
title = {Lyrics dataset},
author={Aleksey Korshuk
},
year={2021}
} | null | 1 | 5 | ---
language:
- en
tags:
- huggingartists
- lyrics
---
# Dataset Card for "huggingartists/logic"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 3.343197 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/0f975524d106026e89de983689d007c4.900x900x1.jpg')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/logic">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Logic</div>
<a href="https://genius.com/artists/logic">
<div style="text-align: center; font-size: 14px;">@logic</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/logic).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/logic")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|651| -| -|
The 'train' split can easily be divided into 'train', 'validation' and 'test' splits with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/logic")

# Split proportions; the test share is whatever remains after train + validation.
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# np.split cuts the list of lyrics at the 90% and 97% marks.
texts = datasets["train"]["text"]
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        "train": Dataset.from_dict({"text": list(train)}),
        "validation": Dataset.from_dict({"text": list(validation)}),
        "test": Dataset.from_dict({"text": list(test)}),
    }
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
author={Aleksey Korshuk},
year={2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
imvladikon/hebrew_speech_coursera | 2023-05-05T09:05:00.000Z | [
"task_categories:automatic-speech-recognition",
"size_categories:10K<n<100K",
"language:he",
"region:us"
] | imvladikon | null | null | null | 4 | 5 | ---
task_categories:
- automatic-speech-recognition
language:
- he
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 6670706136.352
num_examples: 20306
- name: validation
num_bytes: 1648062261.28
num_examples: 5076
download_size: 7726933856
dataset_size: 8318768397.632
size_categories:
- 10K<n<100K
---
# Dataset Card for Hebrew Speech Coursera
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```json
{'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/89efd3a0fa3ead3f0b8e432e8796697a738d4561b24ff91f4fb2cc25d86e9fb0/train/ccef55189b7843d49110228cb0a71bfa115.wav',
'array': array([-0.01217651, -0.04351807, -0.06278992, ..., -0.00018311,
-0.00146484, -0.00349426]),
'sampling_rate': 16000},
'sentence': 'מצד אחד ובתנועה הציונית הצעירה'}
```
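Given a decoded record like the one above, the clip duration in seconds follows from the array length divided by the sampling rate; a quick illustration (the array length below is hypothetical, only the 16 kHz sampling rate comes from the card):

```python
sampling_rate = 16_000   # as declared in the `audio` feature above
array_len = 81_920       # hypothetical len(example["audio"]["array"])

duration_seconds = array_len / sampling_rate
print(duration_seconds)  # 5.12
```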
### Data Fields
[More Information Needed]
### Data Splits
| | train | validation |
| ---- | ----- | ---------- |
| number of samples | 20306 | 5076 |
| hours | 28.88 | 7.23 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{imvladikon2022hebrew_speech_coursera,
author = {Gurevich, Vladimir},
title = {Hebrew Speech Recognition Dataset: Coursera},
year = {2022},
  howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_coursera}},
}
```
### Contributions
[More Information Needed] |
jinmang2/common-sense-mrc | 2021-12-12T07:56:31.000Z | [
"region:us"
] | jinmang2 | null | null | null | 0 | 5 | Entry not found |
kroshan/BioASQ | 2021-12-06T14:32:54.000Z | [
"region:us"
] | kroshan | null | null | null | 3 | 5 | Entry not found |
lpsc-fiuba/melisa | 2022-10-22T08:52:56.000Z | [
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"source_datasets:original",
"language:es",
"language:pt",
"license:oth... | lpsc-fiuba | null | TO DO: Cita | null | 3 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
- pt
license:
- other
multilinguality:
all_languages:
- multilingual
es:
- monolingual
pt:
- monolingual
paperswithcode_id: null
size_categories:
all_languages:
- 100K<n<1M
es:
- 100K<n<1M
pt:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conditional-text-generation
- sequence-modeling
- text-classification
- text-scoring
task_ids:
- language-modeling
- sentiment-classification
- sentiment-scoring
- summarization
- topic-classification
---
# Dataset Card for MeLiSA (Mercado Libre for Sentiment Analysis)
**NOTE: THIS CARD IS UNDER CONSTRUCTION**
**NOTE 2: THE RELEASED VERSION OF THIS DATASET IS A DEMO VERSION.**
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Webpage:** https://github.com/lpsc-fiuba/MeLiSA
- **Paper:**
- **Point of Contact:** lestienne@fi.uba.ar
[More Information Needed]
### Dataset Summary
We provide a Mercado Libre product reviews dataset for Spanish and Portuguese text classification. The dataset contains reviews in these two languages collected between August 2020 and January 2021. Each record in the dataset contains the review content and title, the star rating, the country where it was published and the product category (arts, technology, etc.). The corpus is roughly balanced across stars, so each star rating constitutes approximately 20% of the reviews in each language.
| || Spanish ||| Portuguese ||
|---|:------:|:----------:|:-----:|:------:|:----------:|:-----:|
| | Train | Validation | Test | Train | Validation | Test |
| 1 | 88.425 | 4.052 | 5.000 | 50.801 | 4.052 | 5.000 |
| 2 | 88.397 | 4.052 | 5.000 | 50.782 | 4.052 | 5.000 |
| 3 | 88.435 | 4.052 | 5.000 | 50.797 | 4.052 | 5.000 |
| 4 | 88.449 | 4.052 | 5.000 | 50.794 | 4.052 | 5.000 |
| 5 | 88.402 | 4.052 | 5.000 | 50.781 | 4.052 | 5.000 |
The table shows the number of samples per star rate in each split. There is a total of 442.108 training samples in Spanish and 253.955 in Portuguese. We limited the number of reviews per product to 30 and performed a ranked inclusion of the downloaded reviews to favor those with rich semantic content. In this ranking, the length of the review content and the valorization (difference between likes and dislikes) were prioritized. For more details on this process, see (CITATION).
Reviews in Spanish were obtained from seven different Latin American countries (Argentina, Colombia, Peru, Uruguay, Chile, Venezuela and Mexico), and Portuguese reviews were extracted from Brazil. To match each language with its respective country, we applied a language detection algorithm based on the works of Joulin et al. (2016a and 2016b) to determine the language of the review text, and we removed reviews that were not written in the expected language.
[More Information Needed]
### Languages
The dataset contains reviews in Latin American Spanish and Portuguese.
## Dataset Structure
### Data Instances
Each data instance corresponds to a review. Each split is stored in a separated `.csv` file, so every row in each file consists on a review. For example, here we show a snippet of the spanish training split:
```csv
country,category,review_content,review_title,review_rate
...
MLA,Tecnología y electrónica / Tecnologia e electronica,Todo bien me fue muy util.,Muy bueno,2
MLU,"Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal",No fue lo que esperaba. El producto no me sirvió.,No fue el producto que esperé ,2
MLM,Tecnología y electrónica / Tecnologia e electronica,No fue del todo lo que se esperaba.,No me fue muy funcional ahí que hacer ajustes,2
...
```
### Data Fields
- `country`: The string identifier of the country. It could be one of the following: `MLA` (Argentina), `MCO` (Colombia), `MPE` (Peru), `MLU` (Uruguay), `MLC` (Chile), `MLV` (Venezuela), `MLM` (Mexico) or `MLB` (Brasil).
- `category`: String representation of the product's category. It could be one of the following:
- Hogar / Casa
- Tecnologı́a y electrónica / Tecnologia e electronica
- Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal
- Arte y entretenimiento / Arte e Entretenimiento
- Alimentos y Bebidas / Alimentos e Bebidas
- `review_content`: The text content of the review.
- `review_title`: The text title of the review.
- `review_rate`: An int between 1-5 indicating the number of stars.
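Because each split is stored as a plain `.csv` file, the standard library is enough to parse it; a minimal sketch using the snippet above as inline data (assuming the released files follow standard CSV quoting conventions):

```python
import csv
import io

# Inline stand-in for a few rows of one of the split files.
sample = (
    "country,category,review_content,review_title,review_rate\n"
    "MLA,Tecnología y electrónica / Tecnologia e electronica,"
    "Todo bien me fue muy util.,Muy bueno,2\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
row = rows[0]
print(row["country"], int(row["review_rate"]))  # MLA 2
```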
### Data Splits
Each language configuration comes with its own `train`, `validation`, and `test` splits. The `all_languages` split is simply a concatenation of the corresponding split across all languages. That is, the `train` split for `all_languages` is a concatenation of the `train` splits for each of the languages, and likewise for `validation` and `test`.
## Dataset Creation
### Curation Rationale
The dataset is motivated by the desire to advance sentiment analysis and text classification in Latin American Spanish and Portuguese.
### Source Data
#### Initial Data Collection and Normalization
The authors gathered the reviews from the marketplaces in Argentina, Colombia, Peru, Uruguay, Chile, Venezuela and Mexico for the Spanish language, and from Brazil for Portuguese. They prioritized reviews that contained relevant semantic content by applying a ranking filter based on the length and the valorization (difference between the number of likes and dislikes) of the review. They then ensured the correct language by applying a semi-automatic language detection algorithm, only retaining reviews in the target language. No normalization was applied to the review content or title.
Original product categories were grouped into higher-level categories, resulting in five different types of products: "Home" (Hogar / Casa), "Technology and electronics" (Tecnología y electrónica / Tecnologia e electronica), "Health, Dress and Personal Care" (Salud, ropa y cuidado personal / Saúde, roupas e cuidado pessoal), "Food and Beverages" (Alimentos y Bebidas / Alimentos e Bebidas) and "Arts and Entertainment" (Arte y entretenimiento / Arte e Entretenimiento).
#### Who are the source language producers?
The original text comes from Mercado Libre customers reviewing products on the marketplace across a variety of product categories.
### Annotations
#### Annotation process
Each of the fields included are submitted by the user with the review or otherwise associated with the review. No manual or machine-driven annotation was necessary.
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Mercado Libre reviews are submitted by users with the knowledge that they will be public. The reviewer IDs included in this dataset are anonymized, meaning that they are disassociated from the original user profiles. However, these fields would likely be easy to de-anonymize given the public and identifying nature of free-form text responses.
## Considerations for Using the Data
### Social Impact of Dataset
Although Spanish and Portuguese are relatively high-resource languages, most existing data is collected from European or United States users. This dataset is part of an effort to encourage text classification research in languages other than English and European Spanish and Portuguese. Such work increases the accessibility of natural language technology to more regions and cultures.
### Discussion of Biases
The data included here are from unverified consumers. Some percentage of these reviews may be fake or contain misleading or offensive language.
### Other Known Limitations
The dataset is constructed so that the distribution of star ratings is roughly balanced. This feature has some advantages for purposes of classification, but to achieve this balance some types of language may be over- or underrepresented relative to the original distribution of reviews.
[More Information Needed]
## Additional Information
### Dataset Curators
Published by Lautaro Estienne, Matías Vera and Leonardo Rey Vega. Managed by the Signal Processing in Comunications Laboratory of the Electronic Department at the Engeneering School of the Buenos Aires University (UBA).
### Licensing Information
Amazon has licensed this dataset under its own agreement, to be found at the dataset webpage here:
https://docs.opendata.aws/amazon-reviews-ml/license.txt
### Citation Information
Please cite the following paper if you found this dataset useful:
(CITATION)
[More Information Needed]
### Contributions
[More Information Needed]
|
mrp/Thai-Semantic-Textual-Similarity-Benchmark | 2021-11-29T06:15:34.000Z | [
"region:us"
] | mrp | null | null | null | 0 | 5 | Sentence representation plays a crucial role in NLP downstream tasks such as NLI, text classification, and STS. Recent sentence representation training techniques require NLI or STS datasets. However, there are no equivalent Thai NLI or STS datasets for sentence representation training.
To address this problem we provide the Thai sentence vector benchmark. We evaluate the Spearman correlation score of the sentence representations’ performance on Thai STS-B (translated version of [STS-B](https://github.com/facebookresearch/SentEval)).
# Thai semantic textual similarity benchmark
- We use [STS-B translated ver.](https://github.com/mrpeerat/Thai-Sentence-Vector-Benchmark/blob/main/sts-test_th.csv) in which we translate STS-B from [SentEval](https://github.com/facebookresearch/SentEval) by using google-translate.
- How to evaluate sentence representation: [SentEval.ipynb](https://github.com/mrpeerat/Thai-Sentence-Vector-Benchmark/blob/main/SentEval.ipynb)
- How to evaluate sentence representation on Google Colab: https://colab.research.google.com/github/mrpeerat/Thai-Sentence-Vector-Benchmark/blob/main/SentEval.ipynb
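The benchmark score is Spearman's correlation, which only rewards agreement in rank order between a model's similarity predictions and the gold STS scores; a minimal tie-free sketch (real evaluations, e.g. via SentEval or scipy, also handle ties):

```python
def spearman(xs, ys):
    """Spearman rank correlation for equal-length lists without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n * n - 1))

# Predicted similarities that preserve the gold ordering score 1.0.
print(spearman([0.1, 0.4, 0.9], [1.0, 2.5, 4.0]))  # 1.0
```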
| Base Model | Spearman's Correlation (*100) | Supervised? |
| ------------- | :-------------: | :-------------: |
| [simcse-model-distil-m-bert](https://huggingface.co/mrp/simcse-model-distil-m-bert) | 38.84 |
| [simcse-model-m-bert-thai-cased](https://huggingface.co/mrp/simcse-model-m-bert-thai-cased) | 39.26 |
| [simcse-model-roberta-base-thai](https://huggingface.co/mrp/simcse-model-roberta-base-thai) | 62.60 |
| [distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) | 63.50 | ✓
| [paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) | 80.11 | ✓ |
nickmuchi/trade-the-event-finance | 2022-02-04T06:05:02.000Z | [
"region:us"
] | nickmuchi | null | null | null | 6 | 5 | Entry not found |
projecte-aina/xquad-ca | 2023-09-13T12:42:48.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"language:ca",
"license:cc-by-sa-4.0",
"arxiv:2107.07903",
"arxiv:1606.05250",
"arxiv:1910.11856",
"regi... | projecte-aina | Professional translation into Catalan of XQuAD dataset (https://github.com/deepmind/xquad).
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating
cross-lingual question answering performance.
The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from
the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with
their professional translations into ten languages:
Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi.
Romanian was added later.
We added Catalan as the 13th language, also using professional native Catalan translators.
XQuAD and XQuAD-Ca datasets are released under CC-by-sa licence. | Carlos Gerardo Rodriguez-Penagos, & Carme Armentano-Oller. (2021). XQuAD-ca [Data set].
Zenodo. http://doi.org/10.5281/zenodo.4757559 | null | 1 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: xquad-ca
size_categories:
- unknown
source_datasets: []
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for XQuAD-Ca
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/6669801
- **Paper:** [Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? A Comprehensive Assessment for Catalan](https://arxiv.org/abs/2107.07903)
- **Point of Contact:** [Carlos Rodríguez-Penagos](carlos.rodriguez1@bsc.es) and [Carme Armentano-Oller](carme.armentano@bsc.es)
### Dataset Summary
Professional translation into Catalan of [XQuAD dataset](https://github.com/deepmind/xquad).
XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 ([Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250)) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Romanian was added later. We added Catalan as the 13th language, also using professional native Catalan translators.
XQuAD and XQuAD-Ca datasets are released under [CC-by-sa](https://creativecommons.org/licenses/by-sa/3.0/legalcode) licence.
### Supported Tasks and Leaderboards
Cross-lingual-QA, Extractive-QA, Language Model
### Languages
The dataset is in Catalan (`ca-ES`)
## Dataset Structure
### Data Instances
One JSON file with 1189 examples.
<pre>
{
"data": [
{
"context": "Al llarg de la seva existència, Varsòvia ha estat una ciutat multicultural. Segons el cens del 1901, de 711.988 habitants, el 56,2 % eren catòlics, el 35,7 % jueus, el 5 % cristians ortodoxos grecs i el 2,8 % protestants. Vuit anys després, el 1909, hi havia 281.754 jueus (36,9 %), 18.189 protestants (2,4 %) i 2.818 mariavites (0,4 %). Això va provocar que es construïssin centenars de llocs de culte religiós a totes les parts de la ciutat. La majoria d’ells es van destruir després de la insurrecció de Varsòvia del 1944. Després de la guerra, les noves autoritats comunistes de Polònia van apocar la construcció d’esglésies i només se’n va construir un petit nombre.",
"qas": [
{
"answers": [
{
"text": "711.988",
"answer_start": 104
}
],
"id": "57338007d058e614000b5bdb",
"question": "Quina era la població de Varsòvia l’any 1901?"
},
{
"answers": [
{
"text": "56,2 %",
"answer_start": 126
}
],
"id": "57338007d058e614000b5bdc",
"question": "Dels habitants de Varsòvia l’any 1901, quin percentatge era catòlic?"
},
...
]
}
]
},
...
]
}
</pre>
### Data Fields
Follows [Rajpurkar, Pranav et al., 2016](http://arxiv.org/abs/1606.05250) for SQuAD v1 datasets.
- `id` (str): Unique ID assigned to the question.
- `title` (str): Title of the Wikipedia article.
- `context` (str): Wikipedia section text.
- `question` (str): Question.
- `answers` (list): List of answers to the question, each containing:
  - `text` (str): The answer span text.
  - `answer_start` (int): Starting character offset of the answer span within the context.
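Because `answer_start` is a character offset into `context`, a quick way to sanity-check a file in this format is to verify that the span at that offset equals the answer text. A minimal sketch, using a made-up miniature record rather than the real data:

```python
import json

# A tiny made-up record following the SQuAD v1.1 paragraph layout
record = json.loads("""
{
  "context": "La capital de Catalunya es Barcelona.",
  "qas": [
    {"id": "q1",
     "question": "Quina es la capital de Catalunya?",
     "answers": [{"text": "Barcelona", "answer_start": 27}]}
  ]
}
""")

def validate(paragraph):
    """Check that every answer span matches the context at its offset."""
    ctx = paragraph["context"]
    for qa in paragraph["qas"]:
        for ans in qa["answers"]:
            start = ans["answer_start"]
            span = ctx[start:start + len(ans["text"])]
            assert span == ans["text"], (qa["id"], span)
    return True

print(validate(record))  # True
```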
### Data Splits
- test.json: 1189 examples.
## Dataset Creation
### Curation Rationale
We created this dataset to contribute to the development of language models in Catalan, a low-resource language, to ensure compatibility with similar datasets in other languages, and to allow inter-lingual comparisons.
### Source Data
- [XQuAD's webpage](https://github.com/deepmind/xquad).
#### Initial Data Collection and Normalization
This dataset is a professional translation of [XQuAD](https://github.com/deepmind/xquad) into Catalan, commissioned by [BSC TeMU](https://temu.bsc.es/) within [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina/).
For more information on how XQuAD was created, refer to the paper, On the [Cross-lingual Transferability of Monolingual Representations](https://arxiv.org/abs/1910.11856), or visit the [XQuAD's webpage](https://github.com/deepmind/xquad).
#### Who are the source language producers?
For more information on how XQuAD was created, refer to the paper, [On the Cross-lingual Transferability of Monolingual Representations ](https://arxiv.org/abs/1910.11856), or visit the [XQuAD's webpage](https://github.com/deepmind/xquad).
### Annotations
This is a professional translation of the XQuAD corpus and its annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
Translation was commissioned to a professional translation company.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Carlos Rodríguez-Penagos (carlos.rodriguez1@bsc.es) and Carme Armentano-Oller (carme.armentano@bsc.es) from [BSC-CNS](https://www.bsc.es/).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Citation Information
```
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
[DOI](https://doi.org/10.5281/zenodo.4526223)
### Contributions
[N/A] |
shivkumarganesh/CoLA | 2021-10-30T19:53:06.000Z | [
"region:us"
] | shivkumarganesh | null | null | null | 1 | 5 | Entry not found |
shpotes/tfcol | 2021-11-16T21:49:16.000Z | [
"region:us"
] | shpotes | null | null | null | 0 | 5 | Entry not found |
Alvenir/alvenir_asr_da_eval | 2022-06-16T09:13:33.000Z | [
"license:cc-by-4.0",
"region:us"
] | Alvenir | Dataset of a little bit more than 5hours primarily intended as an evaluation dataset for Danish. | null | null | 5 | 5 | ---
license: cc-by-4.0
---
# Dataset Card alvenir_asr_da_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Prompts/sentence selection](#prompts/sentence-selection)
- [Recording](#recording)
- [Evaluation](#evaluation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** https://alvenir.ai
- **Repository:** https://github.com/danspeech/alvenir-asr-da-eval/
### Dataset Summary
This dataset was created by Alvenir in order to evaluate ASR models in Danish. It can also be used for training but the amount is very limited.
The dataset consists of .wav files with corresponding reference text. The amount of data is just above 5 hours, spread across 50 speakers aged 20-60. The data was collected by a third-party vendor through their software and personnel. All recordings have been validated.
## Dataset Structure
### Data Instances
A data point consists of a path to the audio file, called path and its sentence. Additional fields will eventually be added such as age and gender.
```
{'audio': {'path': 'some_path.wav', 'array': array([-0.044223, -0.00031411, -0.00435671, ..., 0.00612312, 0.00014581, 0.00091009], dtype=float32), 'sampling_rate': 16000}}
```
### Data Fields
audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
sentence: The sentence the user was prompted to speak
### Data Splits
Since the dataset is intended as a test/evaluation ASR dataset for Danish, there is only a test split.
## Dataset Creation
### Prompts/sentence selection
The sentences used for prompts were gathered from the Danish part of OpenSubtitles (OSS) (reference needed) and Wikipedia (WIKI). The OSS prompts were sampled randomly across the dataset, making sure that all prompts are unique. The WIKI prompts were selected by first training a topic model with 30 topics on Wikipedia and then randomly sampling an equal number of unique sentences from each topic. All sentences were manually inspected.
### Recording
50 unique speakers were all sent 20 WIKI sentences and 60 sentences from OSS. The recordings took place through third party recording software.
### Evaluation
All recordings were evaluated by a third party to confirm alignment between audio and text.
### Personal and Sensitive Information
The dataset consists of people who have given their voice to the dataset for ASR purposes. You agree to not attempt to determine the identity of any of the speakers in the dataset.
### Licensing Information
[cc-by-4.0](https://creativecommons.org/licenses/by/4.0/)
|
ruanchaves/nru_hse | 2022-10-20T19:12:59.000Z | [
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:ru",
"license:unknown",
"word-segmentation",
"arxiv:1911.03270",
"region:us"
] | ruanchaves | 2000 real hashtags collected from several pages about civil services on vk.com (a Russian social network)
and then segmented manually. | @article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- ru
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- structure-prediction
task_ids: []
pretty_name: NRU-HSE
tags:
- word-segmentation
---
# Dataset Card for NRU-HSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [glushkovato/hashtag_segmentation](https://github.com/glushkovato/hashtag_segmentation/)
- **Paper:** [Char-RNN and Active Learning for Hashtag Segmentation](https://arxiv.org/abs/1911.03270)
### Dataset Summary
Real hashtags collected from several pages about civil services on vk.com (a Russian social network) and then segmented manually.
### Languages
Russian
## Dataset Structure
### Data Instances
```
{
"index": 0,
"hashtag": "ЁлкаВЗазеркалье",
"segmentation": "Ёлка В Зазеркалье"
}
```
### Data Fields
- `index`: a numerical index.
- `hashtag`: the original hashtag.
- `segmentation`: the gold segmentation for the hashtag.
## Dataset Creation
- All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: `hashtag` and `segmentation` or `identifier` and `segmentation`.
- The only difference between `hashtag` and `segmentation` or between `identifier` and `segmentation` are the whitespace characters. Spell checking, expanding abbreviations or correcting characters to uppercase go into other fields.
- There is always whitespace between an alphanumeric character and a sequence of any special characters (such as `_`, `:`, `~`).
- If there are any annotations for named entity recognition and other token classification tasks, they are given in a `spans` field.
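The whitespace-only invariant described above can be checked mechanically. A small sketch using the example instance from this card:

```python
def consistent(hashtag: str, segmentation: str) -> bool:
    # The two fields must be identical once all whitespace is removed
    return hashtag == "".join(segmentation.split())

example = {
    "hashtag": "ЁлкаВЗазеркалье",
    "segmentation": "Ёлка В Зазеркалье",
}
print(consistent(example["hashtag"], example["segmentation"]))  # True
```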
## Additional Information
### Citation Information
```
@article{glushkova2019char,
title={Char-RNN and Active Learning for Hashtag Segmentation},
author={Glushkova, Taisiya and Artemova, Ekaterina},
journal={arXiv preprint arXiv:1911.03270},
year={2019}
}
```
### Contributions
This dataset was added by [@ruanchaves](https://github.com/ruanchaves) while developing the [hashformers](https://github.com/ruanchaves/hashformers) library. |
rubrix/research_papers_multi-label | 2022-03-17T11:29:02.000Z | [
"region:us"
] | rubrix | null | null | null | 2 | 5 | Entry not found |
tau/multi_news | 2022-03-24T08:56:03.000Z | [
"region:us"
] | tau | Multi-News consists of news articles and human-written summaries
of these articles from the site newser.com.
Each summary is professionally written by editors and
includes links to the original articles cited.
There are two features:
- document: text of news articles separated by special token "|||||".
- summary: news summary. | @misc{alex2019multinews,
title={Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model},
author={Alexander R. Fabbri and Irene Li and Tianwei She and Suyi Li and Dragomir R. Radev},
year={2019},
eprint={1906.01749},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 0 | 5 | Entry not found |
UrukHan/t5-russian-spell_I | 2022-03-27T12:53:21.000Z | [
"region:us"
] | UrukHan | null | null | null | 0 | 5 | Entry not found |
MLCommons/peoples_speech_v1.0 | 2022-08-10T16:41:34.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1T<n",
"source_datasets:original",
"language:en",
... | MLCommons | null | null | null | 6 | 5 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-2.0
- cc-by-2.5
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: People's Speech
size_categories:
- 1T<n
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids:
- speech-recognition
- robust-speech-recognition
- noisy-speech-recognition
---
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [datasets@mlcommons.org](mailto:datasets@mlcommons.org)
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available with a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
{
"id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
"audio": {
"path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac"
"array": array([-6.10351562e-05, ...]),
"sampling_rate": 16000
}
"duration_ms": 14490,
"text": "contends that the suspension clause requires a [...]"
}
### Data Fields
{
"id": datasets.Value("string"),
"audio": datasets.Audio(sampling_rate=16_000),
"duration_ms": datasets.Value("int32"),
"text": datasets.Value("string"),
}
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean`, `cc-by-dirty`, `cc-by-sa-clean`, `cc-by-sa-dirty`, and `microset`. We don't provide splits for any of the configurations.
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native American English speakers to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues, such as speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that could come from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
surdan/nerel_short | 2022-10-25T10:06:49.000Z | [
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"language:ru",
"region:us"
] | surdan | null | null | null | 0 | 5 | ---
language: ru
multilinguality: monolingual
task_ids:
- named-entity-recognition
---
### About DataSet
The dataset is based on the NEREL corpus.
For more information about the original data, please visit this [source](https://github.com/dialogue-evaluation/RuNNE).
An example of preparing the original data is shown in `Prepare_original_data.ipynb`.
### Additional info
The dataset contains 29 entity types; each tag can mark either the beginning of an entity ("B-") or an inner token ("I-").
Frequency of each tag:
- I-AGE: 284
- B-AGE: 247
- B-AWARD: 285
- I-AWARD: 466
- B-CITY: 1080
- I-CITY: 39
- B-COUNTRY: 2378
- I-COUNTRY: 128
- B-CRIME: 214
- I-CRIME: 372
- B-DATE: 2701
- I-DATE: 5437
- B-DISEASE: 136
- I-DISEASE: 80
- B-DISTRICT: 98
- I-DISTRICT: 73
- B-EVENT: 3369
- I-EVENT: 2524
- B-FACILITY: 376
- I-FACILITY: 510
- B-FAMILY: 27
- I-FAMILY: 22
- B-IDEOLOGY: 271
- I-IDEOLOGY: 20
- B-LANGUAGE: 32
- I-LAW: 1196
- B-LAW: 297
- B-LOCATION: 242
- I-LOCATION: 139
- B-MONEY: 147
- I-MONEY: 361
- B-NATIONALITY: 437
- I-NATIONALITY: 41
- B-NUMBER: 1079
- I-NUMBER: 328
- B-ORDINAL: 485
- I-ORDINAL: 6
- B-ORGANIZATION: 3339
- I-ORGANIZATION: 3354
- B-PENALTY: 73
- I-PENALTY: 104
- B-PERCENT: 51
- I-PERCENT: 37
- B-PERSON: 5148
- I-PERSON: 3635
- I-PRODUCT: 48
- B-PRODUCT: 197
- B-PROFESSION: 3869
- I-PROFESSION: 2598
- B-RELIGION: 102
- I-RELIGION: 1
- B-STATE_OR_PROVINCE: 436
- I-STATE_OR_PROVINCE: 154
- B-TIME: 187
- I-TIME: 529
- B-WORK_OF_ART: 133
- I-WORK_OF_ART: 194
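Since the tags follow the IOB scheme, a typical first step when working with the dataset is decoding a tag sequence into entity spans. A minimal, illustrative sketch (not shipped with the dataset):

```python
def iob_to_spans(tags):
    """Decode a list of IOB tags into (entity_type, start, end) spans; end is exclusive."""
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes a trailing span
        if tag == "O" or tag.startswith("B-"):
            if start is not None:
                spans.append((etype, start, i))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            # Tolerate an I- tag with no preceding B- by opening a new span
            start, etype = i, tag[2:]
    return spans

tags = ["B-PERSON", "I-PERSON", "O", "B-CITY"]
print(iob_to_spans(tags))  # [('PERSON', 0, 2), ('CITY', 3, 4)]
```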
You can find the mapper for entity ids in the `id_to_label_map.pickle` file:
```python
import pickle
with open('id_to_label_map.pickle', 'rb') as f:
mapper = pickle.load(f)
``` |
enoriega/GENIA-Term-Corpus | 2022-04-21T00:26:31.000Z | [
"region:us"
] | enoriega | GENIA Term corpus | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 0 | 5 | Entry not found |
adithya7/xlel_wd_dictionary | 2022-07-01T17:30:21.000Z | [
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:af",
"language:ar",
"language:be",
"language:bg",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"langu... | adithya7 | XLEL-WD is a multilingual event linking dataset. This sub-dataset contains a dictionary of events from Wikidata. The multilingual descriptions for Wikidata event items are taken from the corresponding Wikipedia articles. | @article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
} | null | 0 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- be
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- es
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- ko
- ml
- mr
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- si
- sk
- sl
- sr
- sv
- sw
- ta
- te
- th
- tr
- uk
- vi
- zh
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: XLEL-WD is a multilingual event linking dataset. This supplementary dataset
contains a dictionary of event items from Wikidata. The descriptions for Wikidata
event items are taken from the corresponding multilingual Wikipedia articles.
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories: []
task_ids: []
---
# Dataset Card for XLEL-WD-Dictionary
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/adithya7/xlel-wd>
- **Repository:** <https://github.com/adithya7/xlel-wd>
- **Paper:** <https://arxiv.org/abs/2204.06535>
- **Leaderboard:** N/A
- **Point of Contact:** Adithya Pratapa
### Dataset Summary
XLEL-WD is a multilingual event linking dataset. This supplementary dataset contains a dictionary of event items from Wikidata. The descriptions for Wikidata event items are taken from the corresponding multilingual Wikipedia articles.
### Supported Tasks and Leaderboards
This dictionary can be used as a part of the event linking task.
### Languages
This dataset contains text from 44 languages. The language names and their ISO 639-1 codes are listed below. For details on the dataset distribution for each language, refer to the original paper.
| Language | Code | Language | Code | Language | Code | Language | Code |
| -------- | ---- | -------- | ---- | -------- | ---- | -------- | ---- |
| Afrikaans | af | Arabic | ar | Belarusian | be | Bulgarian | bg |
| Bengali | bn | Catalan | ca | Czech | cs | Danish | da |
| German | de | Greek | el | English | en | Spanish | es |
| Persian | fa | Finnish | fi | French | fr | Hebrew | he |
| Hindi | hi | Hungarian | hu | Indonesian | id | Italian | it |
| Japanese | ja | Korean | ko | Malayalam | ml | Marathi | mr |
| Malay | ms | Dutch | nl | Norwegian | no | Polish | pl |
| Portuguese | pt | Romanian | ro | Russian | ru | Sinhala | si |
| Slovak | sk | Slovene | sl | Serbian | sr | Swedish | sv |
| Swahili | sw | Tamil | ta | Telugu | te | Thai | th |
| Turkish | tr | Ukrainian | uk | Vietnamese | vi | Chinese | zh |
## Dataset Structure
### Data Instances
Each instance in the `label_dict.jsonl` file follows the below template,
```json
{
"label_id": "830917",
"label_title": "2010 European Aquatics Championships",
"label_desc": "The 2010 European Aquatics Championships were held from 4–15 August 2010 in Budapest and Balatonfüred, Hungary. It was the fourth time that the city of Budapest hosts this event after 1926, 1958 and 2006. Events in swimming, diving, synchronised swimming (synchro) and open water swimming were scheduled.",
"label_lang": "en"
}
```
### Data Fields
| Field | Meaning |
| ----- | ------- |
| `label_id` | Wikidata ID |
| `label_title` | Title for the event, as collected from the corresponding Wikipedia article |
| `label_desc` | Description for the event, as collected from the corresponding Wikipedia article |
| `label_lang` | language used for the title and description |
### Data Splits
This dictionary has a single split, `dictionary`. It contains 10947 event items from Wikidata and a total of 114834 text descriptions collected from multilingual Wikipedia articles.
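Given the JSONL layout shown above, descriptions can be grouped by Wikidata event in a single pass. A sketch using inline toy records in place of the real `label_dict.jsonl`:

```python
import json
from collections import defaultdict

# Toy stand-in for label_dict.jsonl; the real file has one JSON record per line
jsonl = """\
{"label_id": "830917", "label_title": "2010 European Aquatics Championships", "label_desc": "...", "label_lang": "en"}
{"label_id": "830917", "label_title": "Campionats d'Europa de natacio de 2010", "label_desc": "...", "label_lang": "ca"}
{"label_id": "12345", "label_title": "Some other event", "label_desc": "...", "label_lang": "en"}
"""

by_event = defaultdict(dict)
for line in jsonl.splitlines():
    rec = json.loads(line)
    # One title per language for each Wikidata event item
    by_event[rec["label_id"]][rec["label_lang"]] = rec["label_title"]

print(len(by_event))               # 2 distinct events
print(sorted(by_event["830917"]))  # ['ca', 'en']
```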
## Dataset Creation
### Curation Rationale
This dataset helps address the task of event linking. KB linking is extensively studied for entities, but it is unclear whether the same methodologies can be extended to linking mentions to events from a KB. Event items are collected from Wikidata.
### Source Data
#### Initial Data Collection and Normalization
A Wikidata item is considered a potential event if it has spatial and temporal properties. The final event set is collected after post-processing for quality control.
#### Who are the source language producers?
The titles and descriptions for the events are written by Wikipedia contributors.
### Annotations
#### Annotation process
This dataset was automatically compiled from Wikidata. It was post-processed to improve data quality.
#### Who are the annotators?
Wikidata and Wikipedia contributors.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
This dictionary primarily contains eventive nouns from Wikidata. It does not include other event items from Wikidata such as disease outbreak (Q3241045), military offensive (Q2001676), and war (Q198).
## Additional Information
### Dataset Curators
The dataset was curated by Adithya Pratapa, Rishubh Gupta and Teruko Mitamura. The code for collecting the dataset is available at [Github:xlel-wd](https://github.com/adithya7/xlel-wd).
### Licensing Information
XLEL-WD dataset is released under [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bib
@article{pratapa-etal-2022-multilingual,
title = {Multilingual Event Linking to Wikidata},
author = {Pratapa, Adithya and Gupta, Rishubh and Mitamura, Teruko},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2204.06535},
}
```
### Contributions
Thanks to [@adithya7](https://github.com/adithya7) for adding this dataset.
|
d0r1h/customer_churn | 2022-05-07T03:27:33.000Z | [
"license:apache-2.0",
"region:us"
] | d0r1h | null | null | null | 2 | 5 | ---
license: apache-2.0
---
|
bigscience-data/roots_en_odiencorp | 2022-12-12T11:01:55.000Z | [
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 5 | ---
language: en
license: cc-by-nc-sa-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_odiencorp
# OdiEnCorp2.0
- Dataset uid: `odiencorp`
### Description
OdiEnCorp is a collection of Odia-English parallel and Odia monolingual sentences collected from different sources such as Odia Wikipedia, websites, books, and dictionaries, using various manual and machine learning techniques, including web scraping and optical character recognition. OdiEnCorp 2.0 served in the WAT 2020 English-Odia Indic Task.
### Homepage
https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3211
### Licensing
- non-commercial use
- cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
### Speaker Locations
- Southern Asia
- India
### Sizes
- 0.0043 % of total
- 2.2553 % of indic-or
- 0.0000 % of en
### BigScience processing steps
#### Filters applied to: indic-or
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: en
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
|
bigscience-data/roots_en_book_dash_books | 2022-12-12T11:02:01.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 5 | ---
language: en
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_book_dash_books
# Book Dash Books
- Dataset uid: `book_dash_books`
### Description
Book Dash believes that every child should own one hundred books by the age of five.
To that end, we gather creative professionals who volunteer to create new, African storybooks that anyone can freely translate, print and distribute. In this way, we have vastly reduced the costs involved in putting high-quality books in children’s hands and hearts.
### Homepage
https://bookdash.org/books/
### Licensing
Creative Commons Attribution 4.0
### Speaker Locations
- Africa
- South Africa
### Sizes
- 0.0000 % of total
- 0.0000 % of en
- 0.0000 % of fr
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: fr
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
|
bigscience-data/roots_en_the_pile_uspto | 2022-12-12T11:03:28.000Z | [
"language:en",
"license:mit",
"region:us"
] | bigscience-data | null | null | null | 0 | 5 | ---
language: en
license: mit
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_en_the_pile_uspto
# the_pile_uspto
- Dataset uid: `the_pile_uspto`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.5358 % of total
- 2.9032 % of en
### BigScience processing steps
#### Filters applied to: en
- dedup_document
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
|
bigscience-data/roots_indic-bn_bengali_question_answering | 2022-12-12T11:06:52.000Z | [
"language:bn",
"license:cc-by-4.0",
"region:us"
] | bigscience-data | null | null | null | 0 | 5 | ---
language: bn
license: cc-by-4.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-bn_bengali_question_answering
# Bengali Question Answering Dataset
- Dataset uid: `bengali_question_answering`
### Description
This dataset contains the data for the paper "Deep learning-based question answering system in Bengali". It is a version of the SQuAD 2.0 dataset translated into Bengali. Preprocessing details can be found in the paper.
Link : https://zenodo.org/record/4557874#.YDVGxegzZPZ
Paper : https://www.tandfonline.com/doi/full/10.1080/24751839.2020.1833136
### Homepage
https://www.kaggle.com/mayeesha/bengali-question-answering-dataset
### Licensing
Creative Commons Attribution 4.0 International
### Speaker Locations
- Southern Asia
- Bangladesh
### Sizes
- 0.0030 % of total
- 0.1401 % of indic-bn
### BigScience processing steps
#### Filters applied to: indic-bn
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
|
Aniemore/resd | 2023-06-10T22:15:40.000Z | [
"task_categories:audio-classification",
"task_ids:audio-emotion-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ru",
"lice... | Aniemore | null | null | null | 3 | 5 | ---
license:
- mit
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- crowdsourced
language:
- ru
multilinguality:
- monolingual
pretty_name: Russian Emotional Speech Dialogs
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- audio-emotion-recognition
dataset_info:
features:
- name: name
dtype: string
- name: path
dtype: string
- name: emotion
dtype: string
- name: speech
dtype: audio
splits:
- name: test
num_bytes: 96603538.0
num_examples: 280
- name: train
num_bytes: 398719157.336
num_examples: 1116
download_size: 485403675
dataset_size: 495322695.336
---
# Dataset Card for resd
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage: https://huggingface.co/datasets/Aniemore/resd**
- **Repository: https://github.com/aniemore/Aniemore**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A Russian dataset of emotional speech dialogues, assembled from ~3.5 hours of live speech by actors who acted out pre-assigned emotions in dialogues of ~3 minutes each.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This dataset was created by Artem Amentes, Nikita Davidchuk and Ilya Lubenets.
### Citation Information
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
```
### Contributions
Thanks to [@Ar4ikov](https://github.com/Ar4ikov) for adding this dataset. |
Aniemore/cedr-m7 | 2022-07-01T16:39:56.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|cedr",
"language:ru",
"license:mit",
"region:us"
] | Aniemore | null | null | null | 5 | 5 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ru
license: mit
multilinguality:
- monolingual
pretty_name: cedr-m7
size_categories:
- 1K<n<10K
source_datasets:
- extended|cedr
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for CEDR-M7
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.com/aniemore/Aniemore}},
email = {hello@socialcode.ru}
}
```
### Contributions
Thanks to [@toiletsandpaper](https://github.com/toiletsandpaper) for adding this dataset.
|
taesiri/GamePhysics_Grand_Theft_Auto_V | 2022-05-26T06:00:19.000Z | [
"region:us"
] | taesiri | A test dataset for GamePhysics | @article{taesiri2022clip,
title={CLIP meets GamePhysics: Towards bug identification in gameplay videos using zero-shot transfer learning},
author={Taesiri, Mohammad Reza and Macklon, Finlay and Bezemer, Cor-Paul},
journal={arXiv preprint arXiv:2203.11096},
year={2022}
} | null | 3 | 5 | ---
annotations_creators:
- no-annotation
language:
- en
---
# Dataset Card for GamePhysics_Grand_Theft_Auto_V
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://asgaardlab.github.io/CLIPxGamePhysics/
- **Repository:** https://github.com/asgaardlab/CLIPxGamePhysics
- **Paper:** CLIP meets GamePhysics
- **Leaderboard:** [N/A]
- **Point of Contact:** [Mohammad Reza Taesiri](mailto:mtaesiri@gmail.com)
### Dataset Summary
The GamePhysics Grand Theft Auto V dataset is a small dataset of buggy gameplay videos from the Grand Theft Auto V game, collected from the [GamePhysics](https://www.reddit.com/r/GamePhysics/) subreddit.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] |
sileod/discourse_marker_qa | 2022-07-19T13:00:05.000Z | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language... | sileod | Discourse marker/connective prediction as multiple choice questions based on the Discovery dataset | @inproceedings{sileo-etal-2019-mining,
title = "Mining Discourse Markers for Unsupervised Sentence Representation Learning",
author = "Sileo, Damien and
Van De Cruys, Tim and
Pradel, Camille and
Muller, Philippe",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1351",
doi = "10.18653/v1/N19-1351",
pages = "3477--3486",
abstract = "Current state of the art systems in NLP heavily rely on manually annotated datasets, which are expensive to construct. Very little work adequately exploits unannotated data {--} such as discourse markers between sentences {--} mainly because of data sparseness and ineffective extraction methods. In the present work, we propose a method to automatically discover sentence pairs with relevant discourse markers, and apply it to massive amounts of data. Our resulting dataset contains 174 discourse markers with at least 10k examples each, even for rare markers such as {``}coincidentally{''} or {``}amazingly{''}. We use the resulting data as supervision for learning transferable sentence embeddings. In addition, we show that even though sentence representation learning through prediction of discourse marker yields state of the art results across different transfer tasks, it{'}s not clear that our models made use of the semantic relation between sentences, thus leaving room for further improvements.",
} | null | 3 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: 'discourse_marker_qa'
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- open-domain-qa
- multiple-choice-qa
---
# Dataset for evaluation of (zero-shot) discourse marker prediction with language models
This is the Big-Bench version of our discourse marker prediction dataset, [Discovery](https://huggingface.co/datasets/discovery)
Design considerations:
<https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/discourse_marker_prediction>
GPT2 achieves 15% zero-shot accuracy on this multiple-choice task when choices are ranked by language-modeling perplexity. As a comparison, a fully supervised model (RoBERTa trained for one epoch with default hyperparameters on 10k examples per marker) reaches an accuracy of 30% with 174 possible markers. This shows that the task is hard for GPT2 and that the model did not simply memorize the discourse markers, while higher accuracies are still possible.
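The perplexity-based ranking described above can be sketched as follows: score each candidate completion with a language model and pick the marker yielding the lowest loss. This is a minimal illustration with a pluggable scorer; the `toy_score` function is a stand-in, and in the actual evaluation the scorer would wrap an LM such as GPT2.

```python
def predict_marker(context, continuation, markers, score):
    """Return the marker whose inserted sentence gets the lowest LM loss.

    `score(text)` should return a perplexity-like value (lower = more fluent);
    in the real setup this would wrap a language model such as GPT2.
    """
    candidates = [f"{context} {m} {continuation}" for m in markers]
    losses = [score(c) for c in candidates]
    return markers[losses.index(min(losses))]


# Toy scorer standing in for an LM: pretends "however" is the most fluent choice.
toy_score = lambda text: 0.0 if "however" in text else 1.0

best = predict_marker(
    "It rained all day.", "the match went ahead.",
    ["however", "therefore", "meanwhile"], toy_score,
)
print(best)  # → however
```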
# Citation
```
@inproceedings{sileo-etal-2019-mining,
title = "Mining Discourse Markers for Unsupervised Sentence Representation Learning",
author = "Sileo, Damien and
Van De Cruys, Tim and
Pradel, Camille and
Muller, Philippe",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1351",
doi = "10.18653/v1/N19-1351",
pages = "3477--3486",
}
``` |
wrice/wikipedia-en-punctuated | 2022-05-30T15:07:29.000Z | [
"region:us"
] | wrice | null | null | null | 0 | 5 | Entry not found |
buio/heart-disease | 2022-06-05T11:48:42.000Z | [
"structured-data",
"tabular-data",
"classification",
"region:us"
] | buio | null | null | null | 0 | 5 | ---
tags:
- structured-data
- tabular-data
- classification
---
The [Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/heart+Disease) is provided by the Cleveland Clinic Foundation for Heart Disease. It's a CSV file with 303 rows. Each row contains information about a patient (a sample), and each column describes an attribute of the patient (a feature). We use the features to predict whether a patient has a heart disease (binary classification).
It is originally [hosted here](http://storage.googleapis.com/download.tensorflow.org/data/heart.csv).
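A minimal sketch of splitting such rows into features and a binary label, using only the standard library. The inline two-row sample and the label column name `target` are assumptions here (the real file has 303 rows and more columns):

```python
import csv
import io

# Two-row stand-in for the real CSV; the actual file has 303 rows and more
# columns, and the label column name "target" is an assumption.
sample = io.StringIO(
    "age,sex,cp,thalach,target\n"
    "63,1,1,150,0\n"
    "67,1,4,108,1\n"
)

reader = csv.DictReader(sample)
features, labels = [], []
for row in reader:
    labels.append(int(row.pop("target")))           # 1 = heart disease present
    features.append({k: float(v) for k, v in row.items()})

print(labels)  # → [0, 1]
```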
BeIR/climate-fever-generated-queries | 2022-10-23T06:09:20.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 0 | 5 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](tp://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
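The two loaders below are a minimal standard-library sketch of parsing the corpus and qrels formats described above; the function names and file paths are illustrative, not part of BEIR itself.

```python
import csv
import json


def load_corpus(path):
    """Read a corpus .jsonl file into {doc_id: {"title": ..., "text": ...}}."""
    corpus = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}
    return corpus


def load_qrels(path):
    """Read a qrels .tsv file (header row, then query-id, corpus-id, score)."""
    qrels = {}
    with open(path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)
    return qrels
```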
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
  - `score`: an `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
scikit-learn/breast-cancer-wisconsin | 2022-06-20T14:28:58.000Z | [
"license:cc-by-sa-4.0",
"region:us"
] | scikit-learn | null | null | null | 0 | 5 | ---
license: cc-by-sa-4.0
---
## Breast Cancer Wisconsin Diagnostic Dataset
The following description was retrieved from the [breast cancer dataset on the UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+(diagnostic)).
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. A few of the images can be found [here](https://pages.cs.wisc.edu/~street/images/).
The separating plane described above was obtained using the Multisurface Method-Tree (MSM-T), a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes.
The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].
Attribute Information:
- ID number
- Diagnosis (M = malignant, B = benign)
Ten real-valued features are computed for each cell nucleus:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
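The compactness definition above is easy to sanity-check in code. The sketch below is illustrative only (the helper name is ours, not part of the dataset); by the isoperimetric inequality a circle minimizes perimeter²/area, so it yields the smallest possible compactness.

```python
import math

def compactness(perimeter: float, area: float) -> float:
    """Compactness exactly as defined above: perimeter^2 / area - 1.0."""
    return perimeter ** 2 / area - 1.0

# For a circle of radius r: perimeter = 2*pi*r and area = pi*r^2,
# so perimeter^2 / area = 4*pi for every r -- the minimum possible value.
print(compactness(2 * math.pi, math.pi))  # 4*pi - 1, about 11.566
# A unit square (perimeter 4, area 1) is less compact:
print(compactness(4.0, 1.0))  # 15.0
```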
|
FacePerceiver/laion-face | 2022-11-18T04:04:56.000Z | [
"region:us"
] | FacePerceiver | null | null | null | 15 | 5 | # Laion-Face
[LAION-Face](https://github.com/FacePerceiver/LAION-Face) is the human face subset of [LAION-400M](https://laion.ai/laion-400-open-dataset/); it consists of 50 million image-text pairs. Face detection was conducted to find images with faces. Apart from the 50 million full set (LAION-Face 50M), there is a 20 million subset (LAION-Face 20M) for fast evaluation.
LAION-Face was first used as the training set of [FaRL](https://github.com/FacePerceiver/FaRL), which provides powerful pre-trained transformer backbones for face analysis tasks.
For more details, please check the official repo at https://github.com/FacePerceiver/LAION-Face .
## Download and convert metadata
```bash
wget -l1 -r --no-parent https://the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-meta/
mv the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-meta/ .
wget https://huggingface.co/datasets/FacePerceiver/laion-face/resolve/main/laion_face_ids.pth
wget https://raw.githubusercontent.com/FacePerceiver/LAION-Face/master/convert_parquet.py
python convert_parquet.py ./laion_face_ids.pth ./laion400m-meta ./laion_face_meta
```
## Download the images with img2dataset
Once the metadata is ready, you can start downloading the images.
```bash
wget https://raw.githubusercontent.com/FacePerceiver/LAION-Face/master/download.sh
bash download.sh ./laion_face_meta ./laion_face_data
```
Please be patient: this command may run for several days, requires about 2 TB of disk space, and downloads the 50 million image-text pairs in 32 parts.
- To use the **LAION-Face 50M**, you should use all the 32 parts.
- To use the **LAION-Face 20M**, you should use these parts.
```
0,2,5,8,13,15,17,18,21,22,24,25,28
```
Check out `download.sh` and [img2dataset](https://github.com/rom1504/img2dataset) for more details and parameter settings.
|
jalFaizy/detect_chess_pieces | 2022-10-25T10:34:41.000Z | [
"task_categories:object-detection",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | jalFaizy | The "Object Detection for Chess Pieces" dataset is a toy dataset created (as suggested by the name!) to introduce object detection in a beginner friendly way. | null | null | 3 | 5 | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Object Detection for Chess Pieces
size_categories:
- n<1K
source_datasets: []
task_categories:
- object-detection
task_ids: []
---
# Dataset Card for Object Detection for Chess Pieces
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/faizankshaikh/chessDetection
- **Repository:** https://github.com/faizankshaikh/chessDetection
- **Paper:** -
- **Leaderboard:** -
- **Point of Contact:** [Faizan Shaikh](mailto:faizankshaikh@gmail.com)
### Dataset Summary
The "Object Detection for Chess Pieces" dataset is a toy dataset created (as suggested by the name!) to introduce object detection in a beginner friendly way. It is structured in a one object-one image manner, with the objects being of four classes, namely, Black King, White King, Black Queen and White Queen
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train and evaluate simple object detection models
### Languages
The text (labels) in the dataset is in English
## Dataset Structure
### Data Instances
A data point comprises an image and the corresponding objects in bounding boxes.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=224x224 at 0x23557C66160>,
'objects': { "label": [ 0 ], "bbox": [ [ 151, 151, 26, 26 ] ] }
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 224x224 image.
- `label`: An integer between 0 and 3 representing the classes with the following mapping:
| Label | Description |
| --- | --- |
| 0 | blackKing |
| 1 | blackQueen |
| 2 | whiteKing |
| 3 | whiteQueen |
- `bbox`: A list of integers having sequence [x_center, y_center, width, height] for a particular bounding box
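A common first step with this center-based convention is converting a box to corner coordinates; a minimal sketch (the helper name is ours, not part of the dataset):

```python
def center_to_corners(bbox):
    # bbox follows the [x_center, y_center, width, height] convention above
    x_c, y_c, w, h = bbox
    return (x_c - w / 2, y_c - h / 2, x_c + w / 2, y_c + h / 2)

# The example instance above has bbox [151, 151, 26, 26]:
print(center_to_corners([151, 151, 26, 26]))  # (138.0, 138.0, 164.0, 164.0)
```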
### Data Splits
The data is split into training and validation set. The training set contains 204 images and the validation set 52 images.
## Dataset Creation
### Curation Rationale
The dataset was created to be a simple benchmark for object detection
### Source Data
#### Initial Data Collection and Normalization
The data is obtained by machine-generating images with the "python-chess" library. Please refer to [this code](https://github.com/faizankshaikh/chessDetection/blob/main/code/1.1%20create_images_with_labels.ipynb) to understand the data generation pipeline.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The annotations were done manually.
#### Who are the annotators?
The annotations were done manually.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
The dataset can be considered a beginner-friendly toy dataset for object detection. It should not be used for benchmarking state-of-the-art object detection models, nor for a deployed model.
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
The dataset only contains four classes for simplicity. The complexity can be increased by considering all types of chess pieces, and by making it a multi-object detection problem.
## Additional Information
### Dataset Curators
The dataset was created by Faizan Shaikh
### Licensing Information
The dataset is licensed as CC-BY-SA:2.0
### Citation Information
[Needs More Information] |
BeardedJohn/ubb-endava-conll-assistant-ner | 2022-06-24T13:04:41.000Z | [
"region:us"
] | BeardedJohn | null | null | null | 0 | 5 | Entry not found |
BeardedJohn/ubb-endava-conll-assistant-ner-only-misc | 2023-01-20T10:46:44.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"language:en",
"region:us"
] | BeardedJohn | null | null | null | 0 | 5 | ---
task_categories:
- token-classification
task_ids:
- named-entity-recognition
language:
- en
--- |
DarwinAnim8or/greentext | 2023-01-24T18:32:57.000Z | [
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"language:en",
"license:unknown",
"grug",
"internet",
"greentext",
"region:us"
] | DarwinAnim8or | null | null | null | 1 | 5 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- machine-generated
license:
- unknown
multilinguality:
- monolingual
pretty_name: 'Greentext Dataset
This is content pulled from various archives to create a "greentext bot" or sorts using
GPT-JT-8Bit. '
size_categories: []
source_datasets: []
tags:
- grug
- internet
- greentext
task_categories:
- text2text-generation
task_ids: []
---
# Greentext Dataset
This is content pulled from various archives to create a "greentext bot" of sorts using GPT-JT.
Really, just a dumb joke I made with some friends.
## Biases & Limitations
This dataset contains artifacts such as literal `\n` and `\u2019` escape sequences that need to be filtered out manually.
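One possible cleaning pass for such artifacts might look like the sketch below; the function name and the exact rules are illustrative assumptions, not part of this dataset:

```python
def clean_greentext(text: str) -> str:
    # Decode literal escape sequences (e.g. "\n", "\u2019") left in the raw
    # text, then collapse the resulting whitespace into single spaces.
    decoded = text.encode("ascii", "backslashreplace").decode("unicode_escape")
    return " ".join(decoded.split())

print(clean_greentext("be me\\nit\\u2019s monday"))  # be me it’s monday
```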
Needless to say, this dataset contains *many* instances of profanity & biases, as it consists of data from hell.
I don't recommend actually using any of this. |
BeardedJohn/ubb-endava-assistant-ner-only-misc | 2022-06-29T11:41:31.000Z | [
"region:us"
] | BeardedJohn | null | null | null | 0 | 5 | Entry not found |
ZeyadAhmed/Arabic-SQuADv2.0 | 2022-06-29T16:04:58.000Z | [
"region:us"
] | ZeyadAhmed | null | null | null | 0 | 5 | Entry not found |
PolyAI/evi | 2022-10-25T10:39:33.000Z | [
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"language:fr",
"language:pl",
"license:cc-by-4.0",
"arxiv:2204.13496",
"region:us"
] | PolyAI | EVI is a challenging spoken multilingual dataset with 5,506 dialogues in English, Polish, and French
that can be used for benchmarking and developing knowledge-based enrolment, verification, and identification
for spoken dialogue systems. | @inproceedings{Spithourakis2022evi,
author = {Georgios P. Spithourakis and Ivan Vuli\'{c} and Micha\l{} Lis and I\~{n}igo Casanueva and Pawe\l{} Budzianowski},
title = {{EVI}: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification},
year = {2022},
note = {Data available at https://github.com/PolyAI-LDN/evi-paper},
url = {https://arxiv.org/abs/2204.13496},
booktitle = {Findings of NAACL (publication pending)}
} | null | 2 | 5 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- pl
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: evi-multilingual-spoken-dialogue-tasks-and-1
language_bcp47:
- en
- en-GB
- fr
- fr-FR
- pl
---
# EVI
## Dataset Description
- **Paper:** [EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification](https://arxiv.org/abs/2204.13496)
- **Repository:** [Github](https://github.com/PolyAI-LDN/evi-paper)
EVI is a challenging spoken multilingual dataset
with 5,506 dialogues in English, Polish, and French
that can be used for benchmarking and developing
knowledge-based enrolment, verification, and identification for spoken dialogue systems.
## Example
EVI can be downloaded and used as follows:
```py
from datasets import load_dataset
evi = load_dataset("PolyAI/evi", "en-GB") # for British English
# to download data from all locales use:
# evi = load_dataset("PolyAI/evi", "all")
# see structure
print(evi)
```
## Dataset Structure
We show detailed information of the example for the `en-GB` configuration of the dataset.
All other configurations have the same structure.
### Data Instances
An example of a data instance of the config `en-GB` looks as follows:
```
{
"language": 0,
"dialogue_id": "CA0007220161df7be23f4554704c8720f5",
"speaker_id": "e80e9bdd33eda593f16a1b6f2fb228ff",
"turn_id": 0,
"target_profile_id": "en.GB.608",
"asr_transcription": "w20 a b",
    "asr_nbest": ["w20 a b", "w20 a bee", "w20 a baby"],
"path": "audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
"audio": {
"path": "/home/georgios/.cache/huggingface/datasets/downloads/extracted/0335ebc25feace53243133b49ba17ba18e26f0f97cb083ffdf4e73dd7427b443/audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
"array": array([ 0.00024414, 0.00024414, 0.00024414, ..., 0.00024414,
-0.00024414, 0.00024414], dtype=float32),
"sampling_rate": 8000,
}
}
```
### Data Fields
The data fields are the same among all splits.
- **language** (int): ID of language
- **dialogue_id** (str): the ID of the dialogue
- **speaker_id** (str): the ID of the speaker
- **turn_id** (int): the ID of the turn
- **target_profile_id** (str): the ID of the target profile
- **asr_transcription** (str): ASR transcription of the audio file
- **asr_nbest** (list): n-best ASR transcriptions of the audio file
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path of audio
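As a small example of working with these fields, the clip duration can be recovered from the decoded array and its sampling rate (a generic sketch, shown here with a synthetic waveform rather than a real EVI file):

```python
def duration_seconds(audio: dict) -> float:
    # The "audio" field pairs the decoded waveform with its sampling rate.
    return len(audio["array"]) / audio["sampling_rate"]

# A synthetic 2-second clip at the dataset's 8 kHz sampling rate:
example = {"array": [0.0] * 16000, "sampling_rate": 8000}
print(duration_seconds(example))  # 2.0
```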
### Data Splits
Every config has only the `"test"` split, containing *ca.* 1,800 dialogues.
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@inproceedings{Spithourakis2022evi,
author = {Georgios P. Spithourakis and
Ivan Vuli\'{c} and
Micha\l{} Lis and
I\~{n}igo Casanueva
and Pawe\l{} Budzianowski},
title = {{EVI}: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification},
year = {2022},
note = {Data available at https://github.com/PolyAI-LDN/evi-paper},
url = {https://arxiv.org/abs/2204.13496},
booktitle = {Findings of NAACL (publication pending)}
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for helping with adding this dataset |
djagatiya/ner-ontonotes-v5-eng-v4 | 2022-07-03T11:36:33.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"source_datasets:subset",
"language:eng",
"region:us"
] | djagatiya | null | null | null | 0 | 5 | ---
language:
- eng
task_categories:
- token-classification
task_ids:
- named-entity-recognition
source_datasets:
- subset
---
# (NER) ontonotes-v5-eng-v4
This dataset is a subset of the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) original dataset.
- Language: english
- Version: v4
| Dataset | Examples |
| --- | --- |
| Training | 75187 |
| Testing | 9479 |
|
Christoph911/German-legal-SQuAD | 2022-07-03T12:15:07.000Z | [
"license:mit",
"region:us"
] | Christoph911 | null | null | null | 2 | 5 | ---
license: mit
---
|
CShorten/Last-Week-on-ML-ArXiv | 2022-07-12T21:03:47.000Z | [
"region:us"
] | CShorten | null | null | null | 0 | 5 | Please check here to see when the dataset was last updated. <br />
<h1> Last Updated July 12th, 2022 </h1> |
kmkarakaya/turkishReviews-ds-mini | 2023-10-02T19:42:11.000Z | [
"language:tr",
"region:us"
] | kmkarakaya | null | null | null | 0 | 5 | ---
language:
- tr
--- |
saadob12/chart-to-text | 2022-07-10T10:09:33.000Z | [
"arxiv:2203.06486",
"region:us"
] | saadob12 | null | null | null | 3 | 5 | This dataset only consists of linearized underlying data table of charts and their corresponding summaries.
Model that use this dataset: https://huggingface.co/saadob12/t5_C2T_big
## Created By:
Kanthara, S., Leong, R. T. K., Lin, X., Masry, A., Thakkar, M., Hoque, E., & Joty, S. (2022). Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. arXiv preprint arXiv:2203.06486.
**Paper**: https://arxiv.org/abs/2203.06486
**Orignal github repo**: https://github.com/vis-nlp/Chart-to-text
# Abstract from the Paper
Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.
### Note
The original paper published two sub-datasets: one collected from Statista and the other from Pew. The dataset uploaded here is the Statista one. Images can be downloaded from the GitHub repo mentioned above.
# Language
The data and the summaries are in English.
# Dataset split
| train | valid | test |
|:---:|:---:| :---:|
| 24367 | 5222 | 5222 |
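To give a flavour of what a linearized underlying data table might look like, here is a minimal sketch. The benchmark's actual linearization format is defined in the original repo, so the function, the separator choices, and the sample values below are illustrative assumptions only:

```python
def linearize_table(title, headers, rows):
    # Flatten a chart's underlying data table into a single input string,
    # pairing each column header with its cell value.
    cells = " ".join(f"{h} {v}" for row in rows for h, v in zip(headers, row))
    return f"{title} | {cells}"

print(linearize_table(
    "Smartphone users (millions)",           # hypothetical chart title
    ["Year", "Users"],
    [["2018", "54.2"], ["2019", "58.9"]],
))
# Smartphone users (millions) | Year 2018 Users 54.2 Year 2019 Users 58.9
```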
**Name of Contributor:** Saad Obaid ul Islam |
readerbench/ro-fb-offense | 2023-02-20T13:26:28.000Z | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"regio... | readerbench | null | null | null | 1 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ro
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- hate-speech-detection
pretty_name: RO-FB-Offense
extra_gated_prompt: 'Warning: this repository contains harmful content (abusive language,
hate speech).'
tags:
- hate-speech-detection
---
# Dataset Card for "RO-FB-Offense"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Repository:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Paper:** FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 Romanian user-generated comments from Facebook live broadcasts.
The annotation follows the hierarchical tagset proposed in the GermEval 2018 dataset.
The following classes are available:
* OTHER: Non-Offensive Language
* OFFENSIVE:
- PROFANITY
- INSULT
- ABUSE
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'sender': '$USER1208',
'no_reacts': 1,
'text': 'PLACEHOLDER TEXT',
'label': OTHER,
}
```
### Data Fields
- `sender`: a `string` feature.
- 'no_reacts': a `integer`
- `text`: a `string`.
- `label`: categorical `OTHER`, `PROFANITY`, `INSULT`, `ABUSE`
### Data Splits
| name |train|test|
|---------|----:|---:|
|ro|x|x|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification for Romanian Language.
### Source Data
Facebook comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under Apache-2.0 license
### Citation Information
```
@inproceedings{busuioc2022fb-ro-offense,
title={FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments},
author={ Busuioc, Gabriel-Razvan and Paraschiv, Andrei and Dascalu, Mihai},
booktitle={International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) 2022},
year={2022}
}
```
### Contributions
|
biglam/old_bailey_proceedings | 2022-07-22T17:26:53.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"m... | biglam | The dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML,
and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913),
and each Ordinary's Account file represents a single pamphlet (1676-1772) | @article{Howard2017,
author = "Sharon Howard",
title = "{Old Bailey Online XML Data}",
year = "2017",
month = "4",
url = "https://figshare.shef.ac.uk/articles/dataset/Old_Bailey_Online_XML_Data/4775434",
doi = "10.15131/shef.data.4775434.v2"
} | null | 3 | 5 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
- machine-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Old Bailey Proceedings
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-generation
task_ids:
- multi-class-classification
- language-modeling
- masked-language-modeling
---
# Dataset Card for Old Bailey Proceedings
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.dhi.ac.uk/projects/old-bailey/
- **Repository:** https://www.dhi.ac.uk/san/data/oldbailey/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** The University of Sheffield
Digital Humanities Institute
34 Gell Street
Sheffield S3 7QY
### Dataset Summary
**Note** We are making this dataset available via the HuggingFace hub to open it up to more users and use cases. We have focused primarily on making an initial version of this dataset available, focusing on some potential use cases. If you think there are other configurations this dataset should support, please use the community tab to open an issue.
The dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML, and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913), and each Ordinary's Account file represents a single pamphlet (1676-1772).
### Supported Tasks and Leaderboards
- `language-modeling`: This dataset can be used to contribute to the training or evaluation of language models for historical texts. Since it represents transcription from court proceedings, the language in this dataset may better represent the variety of language used at the time.
- `text-classification`: This dataset can be used to classify what style of English some text is in
- `named-entity-recognition`: Some of the text contains names of people and places. We don't currently provide the token IDs for these entities but do provide the tokens themselves. This means this dataset has the potential to be used to evaluate the performance of other Named Entity Recognition models on this dataset.
### Languages
`en`
## Dataset Structure
### Data Instances
An example of one instance from the dataset:
```python
{
'id': 'OA16760517',
'text': "THE CONFESSION AND EXECUTION Of the Prisoners at TYBURN On Wednesday the 17May1676. Viz. Henry Seabrook , Elizabeth Longman Robert Scot , Condemned the former Sessions. Edward Wall , and Edward Russell . Giving a full
and satisfactory Account of their Crimes, Behaviours, Discourses in Prison, and last Words (as neer as could be taken) at the place of Execution. Published for a Warning, to all that read it, to avoid the like wicked Courses, which brought these poor people to this shameful End. THE CONFESSION AND EXECUTION Of the Prisoners at TYBURN On Wednesday the 17th of May, 1676. Viz. Henry Seabrook , Elizabeth Longman Robert Scot , Condemned the former Sessions. Edward Wall , and Edward Russell . Giving a full and satisfactory Account of their Crimes, Behaviours, Discourses in Prison, and last Words (as neer as could be taken) at the place of Execution. Published for a Warning, to all that read it, to avoid the like wicked Courses, which brought these poor people to this shameful End. However, Mercy so far interposed after the Sentence of Justice, that only Five of them actually suffered: Amongst whom was Elizabeth Longman , an old Offendor, having been above a Dozen several times in Newgate : Some time since she was convicted, and obtained the benefit and favour of Transportation, and was accordingly carried into Virginia : But Clum, non Animutant, qu: trans mare currunt. She had not been there above Fourteen Moneths, before she
procured Monies remitted from some of the Brotherhood here, wherewith she bought off her Servitude, and ever she comes again into England , long before the term of her Sentence was expired. Nor was she content to violate the Law only in that point, bur returned to her old Trade (for so these people call stealing) as well as to her Countrey; and was soon after her Arrival conducted to Newgate , for mistaking several parcels of Silk, upon which being Convicted, and pleading her Belly, she was
set by the last Sessions before this: But now it appearing that she was highly accessary (though all the while in Newgate ) to the Robbery of a Person of Quality, and that she was wholly incorrigible, not to be reclaimed by any Warnings, she was brought down again to the Bar, and demanded, what she could say for her self, why she should not suffer Death, according to Law, upon her old Judgment. To which she still pleaded, that she was quick with Child. But being searched by a Jury of Matrons, they found no such thing; so that she was carried with the rest into the Hole, and ordered for Execution. As for her behaviour, I am sorry no better account can be given of it; for truely she did not seem so sensible of her End, or to make that serious preparation for it, as night be expected from a Person in her condition: yet were not the charitable assistances and endeavours of the Ordinary and several other Ministers wanting towards her, though 'tis feared they did not make the wisht-for Impressions upon her Spirit. Two others viz. Edward Wall and Edward Russel that suffered, were brought to this untimely and ignominious End, by the means and seducements of this unhappy Woman. For they together with one A. M. going after the former Sessions to a Gentlemans House, to sollicite and engage his Interest, in order to the obtaining of a Reprieve for a Woman that past for one of their Wives, and was then under Condemnation, they chanced to spie the Maid a scowring a very considerable quantity of Plate, the glittering sight whereof so much affected them, that when they came back to Newgate , to give an account of their business, amongst other discourse, they mentioned what abundance of Plate they saw. And will you only see it? 
(says this Besse Longman , being by) then you deserve to starve indeed, when Fortune puts Booty, as it were, in your Mouths, and you are such Cowards, that you dare not take it: With these and many other words to that purpose, she animated them on so far, till by her Instigation and the Devils together, they resolved upon the Villany, and accordingly went the next Night, broke open the Gentlemans House, and took thence a great quantity of Plate: But upon description and search, A. M: was taken next Morning on saffron-hill , with a Silver Ladle, a Silver Porringer, and that famous Engine of Wickedness, called Betty. He was carried for the present to New prison , and there kept till he had discovered the othe. Parties; and upon his ingenu u Confession obtained the Mercy of a Repeve from that Execution, which his Fellow Criminals now suffer'd. The other person executed, was Henry Sea brooke : He was condemned the former Sessions for robbing the Merchant at Dukes Place ; but upon his pretending to discover the rest of the Cabal, and other great matters, was kept from the Gibbet all this, while; but now failing to verifie those pretentions, he was ordered by the Court to receive his punishment according to his former Sentence, with the resof the Prisoners condemned this Sessions. 
Of these poor wretches, two, viz Wall and Russell, as they ingenuously pleaded guilty to their Indictment at the Bar, so they behaved themselves very modestly at their Condemnation; and afterwards in Prison when Ministers' came to visit and discourse with them, in order to their Souls everlasting good, they received them with great expressions of joy and este, attending with much reverence and seeming heed to their Spiritual Instruction, who with most necessary and importunate Exhortations pressed them to a speedy and hearty Repentance, Since it stood them so much in hand, being upon the brink of Eternity, they told them, Their Condition was sad, as being justly sentenced by Men to a temporal Death; but that was infinitely short of being condemned by God, and suffering Eternal Death under the ury of his Wrath: that though it was vin for them to flatter themselves with hopes of onger life in this world, yet there were
means est to secure them of Everlasting Life in the ext: and that to such vile sinners as they nd been, it was an unspeakable Mercy, that hey had yet a little space left them, wherein make their peace with Heaven; and what ould the damned Souls, weltring without pe in Eternal Flames, give or do for such a recious opportunity? With such and many her pious Admonitions and Prescriptions did ese Spiritual Physicians endeavour to cure e Ulcers of their Souls, and excite them to row off the peccant matter, and wash away i Iniquities with tears of a sincere Repennce, proceeding not from a sense of approa- ching Punishment, but of trouble for the Evil itself, and their provoking of God thereby. To all which they gave very great attention, promising to put that blessed Advice in practice; and so continued in a very serious and laudable frame till the time of Execution, which was the 17May, being then conducted to Tyburn with vest numbers of people following the Carts to behold the last
sad Scene of their deplorable Tragedy. Being come to the Gallows, and the usual Prayers and Solemnities being performed, one of them spoke a pretty while to the Multitude, protesting, This was the first Face that he was ever actually guilty of, though he had been accessary to divers others, and had been all his days a very ill Liver; so that he could not but acknowledge that he suffer'd justly. He very much admonish'd all persons to consider their ways; especially warning Youth not to misspend their time in Idleness, or Disobedience to Parents or Masters; and to have a care of being seduced and drawn away by led women. affirming that such Courses and their Temptations, and to satisfie their Luxury, had been originally the cause of his destruction, and that shameful death he was now going to suffer. The rest said very few words, unless to some particular Acquaintance; but by their Gestures seemed to pray secretly, and so were all Executed according to Sentence.",
'places': ['TYBURN', 'TYBURN', 'Newgate', 'Virginia', 'England', 'Newgate', 'Newgate', 'Newgate', 'saffron-hill', 'New prison', 'Dukes Place', 'Tyburn'],
'type': 'OA',
'persons': ['Henry Seabrook', 'Elizabeth Longman', 'Robert Scot', 'Edward Wall', 'Edward Russell', 'Henry Seabrook', 'Elizabeth Longman', 'Robert Scot', 'Edward Wall', 'Edward Russell', 'Elizabeth Longman', 'Edward Wall', 'Edward Russel', 'Besse Longman', 'Henry Sea brooke'],
'date': '16760517'}
```
### Data Fields
- `id`: A unique identifier for the data point (in this case, a trial)
- `text`: The text of the proceeding
- `places`: The places mentioned in the text
- `type`: This can be either 'OA' or 'OBP'. OA is "Ordinary's Accounts" and OBP is "Sessions Proceedings"
- `persons`: The persons named in the text
- `date`: The date of the text
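The `date` field packs year, month and day into a single `YYYYMMDD` string (e.g. `'16760517'` above), and `persons`/`places` repeat a name once per mention. A minimal sketch of post-processing a record into more convenient values (field names are taken from the instance above; this helper is illustrative, not part of the dataset):

```python
from datetime import date

def parse_record(record):
    """Derive convenience values from an Old Bailey record (illustrative helper)."""
    d = record["date"]  # 'YYYYMMDD', e.g. '16760517'
    trial_date = date(int(d[:4]), int(d[4:6]), int(d[6:8]))
    # `persons` lists one entry per mention; keep first mentions, in order
    unique_persons = list(dict.fromkeys(record["persons"]))
    return trial_date, unique_persons

record = {
    "date": "16760517",
    "persons": ["Henry Seabrook", "Elizabeth Longman", "Henry Seabrook"],
}
trial_date, persons = parse_record(record)
# trial_date -> date(1676, 5, 17); persons -> the two unique names, in mention order
```

De-duplicating via `dict.fromkeys` preserves insertion order, so the first mention of each name keeps its position.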
### Data Splits
This dataset only contains a single split:
Train: `2638` examples
## Dataset Creation
### Curation Rationale
Between 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail 197,000 individual trials and contain 127 million words in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.
### Source Data
#### Initial Data Collection and Normalization
Starting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.
#### Who are the source language producers?
The text of the 1674 to October 1834 Proceedings was manually typed by the process known as "double rekeying", whereby the text is typed in twice, by two different typists. Then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts. This process means this text data contains fewer errors than many historical text corpora produced using Optical Character Recognition.
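The comparison step of double rekeying can be sketched in a few lines of Python (illustrative only; the project's actual comparison tooling is not described here):

```python
# Flag words where two independently keyed transcriptions disagree,
# so the differences can be resolved manually.
typist_a = "The prisoner was condemned the former Sessions."
typist_b = "The prisoner was condemmed the former Sessions."

diffs = [
    (i, a, b)
    for i, (a, b) in enumerate(zip(typist_a.split(), typist_b.split()))
    if a != b
]
# diffs -> [(3, 'condemned', 'condemmed')]
```

Because a typo is unlikely to be made identically by both typists, agreement between the two keyings is strong evidence the transcription is correct.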
### Annotations
#### Annotation process
The markup was done by a combination of automated and manual processes.
Most of the 1674 to October 1834 markup was done manually by a team of five data developers working at the Humanities Research Institute at the University of Sheffield (see project staff).
However, person names were tagged using an automated markup programme, GATE, developed by the Department of Computer Science at the University of Sheffield and specially customised to process the text of the Proceedings. Most of the 1674-1834 trial proceedings were run through GATE, which was able to identify approximately 80-90% of the names in the text. GATE was asked only to identify names where both a forename (not just an initial) and surname were given. The names not identified by this programme were not regularly marked up manually unless they were the names of defendants or victims.
The November 1834 to 1913 text was first run through an automated markup process. This process was carried out by the Digital Humanities Institute Sheffield.
Remaining markup, including checking of the results of the automated markup, was carried out by a team of eight data developers employed by the University of Hertfordshire (see project staff).
#### Who are the annotators?
- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).
- The Project Manager is Dr Sharon Howard.
- The technical officer responsible for programming the search engines is Jamie McLaughlin.
- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.
- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.
- The London researcher was Mary Clayton.
- The technical officers responsible for the automated markup were Ed MacKenzie and Katherine Rogers.
- Project staff who worked on the 1674-1834 phase of the project include Dr Louise Henson (Senior Data Developer), Dr John Black, Dr Edwina Newman, Kay O'Flaherty, and Gwen Smithson.
### Personal and Sensitive Information
- This dataset contains personal information of people involved in criminal proceedings during the period covered (1674-1913)
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
- "Virtually every aspect of English life between 1674 and 1913 was influenced by gender, and this includes behaviour documented in the Old Bailey Proceedings. Long-held views about the particular strengths, weaknesses, and appropriate responsibilities of each sex shaped everyday lives, patterns of crime, and responses to crime." This dataset contains text that adheres to those stereotypes.
- "The make-up of London's population changed and changed again during the course of the two and a half centuries after 1674. European Protestant refugees, blacks discharged from the armies of a growing empire, and Jews from Spain and Eastern Europe, Irish men and women, Lascars and political refugees from the revolutions of the nineteenth century contributed to the ragout of communities that made up this world city. Information about all these communities, and several more besides, can be found in the Proceedings"
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).
- The Project Manager is Dr Sharon Howard.
- The technical officer responsible for programming the search engines is Jamie McLaughlin.
- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.
- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@article{Howard2017,
author = "Sharon Howard",
title = "{Old Bailey Online XML Data}",
year = "2017",
month = "4",
url = "https://figshare.shef.ac.uk/articles/dataset/Old_Bailey_Online_XML_Data/4775434",
doi = "10.15131/shef.data.4775434.v2"
}
```
Thanks to [@shamikbose](https://github.com/shamikbose) for adding this dataset. |
pyronear/openfire | 2022-12-11T22:25:43.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:apache-2.0",
"region:us"
] | pyronear | OpenFire is an image classification dataset for wildfire detection, collected
from web searches. | @software{Pyronear_PyroVision_2019,
title={Pyrovision: wildfire early detection},
author={Pyronear contributors},
year={2019},
month={October},
publisher = {GitHub},
url = {https://github.com/pyronear/pyro-vision}
} | null | 2 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language: []
license:
- apache-2.0
multilinguality: []
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-classification
task_ids: []
pretty_name: Wildfire image classification dataset collected using images from web
searches.
---
# Dataset Card for OpenFire
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://pyronear.org/pyro-vision/datasets.html#openfire
- **Repository:** https://github.com/pyronear/pyro-vision
- **Point of Contact:** Pyronear <https://pyronear.org/en/>
### Dataset Summary
OpenFire is an image classification dataset for wildfire detection, collected
from web searches.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image URL and its binary label.
```
{
'image_url': 'https://cdn-s-www.ledauphine.com/images/13C08274-6BA6-4577-B3A0-1E6C1B2A573C/FB1200/photo-1338240831.jpg',
'is_wildfire': true,
}
```
### Data Fields
- `image_url`: the download URL of the image.
- `is_wildfire`: a boolean value specifying whether there is an ongoing wildfire on the image.
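Because the dataset distributes image URLs rather than image bytes, a typical first step is to group records by label before downloading. A minimal sketch (field names come from the instance above; the URLs here are placeholders, not real dataset entries):

```python
def split_by_label(records):
    """Partition OpenFire-style records into wildfire / non-wildfire URL lists."""
    wildfire_urls, other_urls = [], []
    for record in records:
        if record["is_wildfire"]:
            wildfire_urls.append(record["image_url"])
        else:
            other_urls.append(record["image_url"])
    return wildfire_urls, other_urls

# Placeholder records mirroring the data-instance schema above
records = [
    {"image_url": "https://example.com/fire.jpg", "is_wildfire": True},
    {"image_url": "https://example.com/forest.jpg", "is_wildfire": False},
]
wildfire_urls, other_urls = split_by_label(records)
```

In practice, downloads from web URLs fail over time, so any real pipeline built on this dataset should tolerate missing images.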
### Data Splits
The data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images.
## Dataset Creation
### Curation Rationale
The curators state that current wildfire classification datasets typically contain close-up shots of wildfires, with limited variation in weather conditions, luminosity and backgrounds, making it difficult to assess real-world performance. They argue that these dataset limitations have partially contributed to the failure of some algorithms to cope with sun flares, foggy or cloudy weather conditions, and small-scale fires.
### Source Data
#### Initial Data Collection and Normalization
OpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors.
### Annotations
#### Annotation process
Each web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors.
#### Who are the annotators?
François-Guillaume Fernandez
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
François-Guillaume Fernandez
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Pyronear_PyroVision_2019,
title={Pyrovision: wildfire early detection},
author={Pyronear contributors},
year={2019},
month={October},
publisher = {GitHub},
howpublished = {\url{https://github.com/pyronear/pyro-vision}}
}
```
|
Muennighoff/mbpp | 2022-10-20T19:43:58.000Z | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:c... | Muennighoff | The MBPP (Mostly Basic Python Problems) dataset consists of around 1,000 crowd-sourced Python
programming problems, designed to be solvable by entry level programmers, covering programming
fundamentals, standard library functionality, and so on. Each problem consists of a task
description, code solution and 3 automated test cases. | @article{austin2021program,
title={Program Synthesis with Large Language Models},
author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
journal={arXiv preprint arXiv:2108.07732},
year={2021}
} | null | 1 | 5 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Mostly Basic Python Problems
tags:
- code-generation
---
# Dataset Card for Mostly Basic Python Problems (mbpp)
## Table of Contents
- [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-(mbpp))
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/google-research/tree/master/mbpp
- **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732)
### Dataset Summary
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us.
Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et. al., 2021](https://arxiv.org/abs/2108.07732).
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generations.
### Languages
English - Python code
## Dataset Structure
```python
dataset_full = load_dataset("mbpp")
DatasetDict({
test: Dataset({
features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'],
num_rows: 974
})
})
dataset_sanitized = load_dataset("mbpp", "sanitized")
DatasetDict({
test: Dataset({
features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'],
num_rows: 427
})
})
```
### Data Instances
#### mbpp - full
```
{
'task_id': 1,
'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].',
'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]',
'test_list': [
'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8',
'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12',
'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'],
'test_setup_code': '',
'challenge_test_list': []
}
```
#### mbpp - sanitized
```
{
'source_file': 'Benchmark Questions Verification V2.ipynb',
'task_id': 2,
'prompt': 'Write a function to find the shared elements from the given two lists.',
'code': 'def similar_elements(test_tup1, test_tup2):\n res = tuple(set(test_tup1) & set(test_tup2))\n return (res) ',
'test_imports': [],
'test_list': [
'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))',
'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))',
'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))'
]
}
```
### Data Fields
- `source_file`: unknown
- `text`/`prompt`: description of programming task
- `code`: solution for programming task
- `test_setup_code`/`test_imports`: necessary code imports to execute tests
- `test_list`: list of tests to verify solution
- `challenge_test_list`: list of more challenging test to further probe solution
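Evaluating a model against this dataset amounts to executing its generated solution against `test_list`. A minimal harness sketch (the solution string is a lightly reformatted version of the `min_cost` instance above; as noted under Considerations below, real evaluations should run untrusted code in a sandboxed subprocess rather than a bare `exec`):

```python
def run_tests(code, test_list, test_setup_code=""):
    """Execute a candidate solution string against its MBPP tests.

    WARNING: `exec` runs arbitrary code; isolate this in a sandboxed
    subprocess with a timeout when evaluating model output.
    Returns the number of passing assertions.
    """
    namespace = {}
    exec(test_setup_code, namespace)
    exec(code, namespace)
    passed = 0
    for test in test_list:
        try:
            exec(test, namespace)
            passed += 1
        except Exception:
            pass
    return passed

# Reference solution and tests, adapted from the `min_cost` instance above
code = """
def min_cost(cost, m, n):
    tc = [[0] * (n + 1) for _ in range(m + 1)]
    tc[0][0] = cost[0][0]
    for i in range(1, m + 1):
        tc[i][0] = tc[i - 1][0] + cost[i][0]
    for j in range(1, n + 1):
        tc[0][j] = tc[0][j - 1] + cost[0][j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            tc[i][j] = min(tc[i - 1][j - 1], tc[i - 1][j], tc[i][j - 1]) + cost[i][j]
    return tc[m][n]
"""
tests = [
    "assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8",
    "assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12",
]
passed = run_tests(code, tests)  # -> 2
```

A completion is usually scored as correct only when all of its tests pass, i.e. `passed == len(tests)`.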
### Data Splits
There are two versions of the dataset (full and sanitized), each with only one split (test).
## Dataset Creation
See section 2.1 of the original [paper](https://arxiv.org/abs/2108.07732).
### Curation Rationale
In order to evaluate code generation functions a set of simple programming tasks as well as solutions is necessary which this dataset provides.
### Source Data
#### Initial Data Collection and Normalization
The dataset was manually created from scratch.
#### Who are the source language producers?
The dataset was created with an internal crowdsourcing effort at Google.
### Annotations
#### Annotation process
The full dataset was created first and a subset then underwent a second round to improve the task descriptions.
#### Who are the annotators?
The dataset was created with an internal crowdsourcing effort at Google.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
### Social Impact of Dataset
With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.
### Discussion of Biases
### Other Known Limitations
The task descriptions might not be expressive enough to fully specify the intended solution. The `sanitized` split aims to address this issue through a second round of annotation that improved the descriptions.
## Additional Information
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{austin2021program,
title={Program Synthesis with Large Language Models},
author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
journal={arXiv preprint arXiv:2108.07732},
year={2021}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
|
chintagunta85/pv_dataset | 2022-07-28T18:52:53.000Z | [
"region:us"
] | chintagunta85 | null | null | null | 0 | 5 | Entry not found |
KaranChand/atcosim_base_pruned_input_split | 2022-08-02T19:09:34.000Z | [
"region:us"
] | KaranChand | null | null | null | 0 | 5 | Entry not found |
rungalileo/trec6 | 2022-10-05T22:48:16.000Z | [
"region:us"
] | rungalileo | null | null | null | 0 | 5 | Entry not found |
rungalileo/conv_intent | 2022-10-05T22:48:48.000Z | [
"region:us"
] | rungalileo | null | null | null | 0 | 5 | Entry not found |
snoop2head/enron_aeslc_emails | 2022-08-04T15:54:22.000Z | [
"region:us"
] | snoop2head | null | null | null | 1 | 5 | Entry not found |
Qilex/EN-ME | 2022-08-11T21:25:34.000Z | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:10K<n<100K",
"language:en",
"language:me",
"license:afl-3.0",
"middle english",
"region:us"
] | Qilex | null | null | null | 2 | 5 | ---
language:
- en
- me
license:
- afl-3.0
multilinguality:
- translation
pretty_name: EN-ME
size_categories:
- 10K<n<100K
tags:
- middle english
task_categories:
- translation
---
EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
This dataset reflects the spelling inconsistencies characteristic of Middle English.
|
jamescalam/oscar-en-minilm-2m | 2022-08-15T18:19:16.000Z | [
"task_categories:sentence-similarity",
"annotations_creators:no-annotation",
"language_creators:other",
"size_categories:1M<n<10M",
"source_datasets:extended|oscar",
"language:en",
"license:afl-3.0",
"embeddings",
"vector search",
"semantic similarity",
"semantic search",
"sentence transformer... | jamescalam | null | null | null | 1 | 5 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- other
license:
- afl-3.0
multilinguality: []
pretty_name: OSCAR MiniLM Embeddings 2M
size_categories:
- 1M<n<10M
source_datasets:
- extended|oscar
tags:
- embeddings
- vector search
- semantic similarity
- semantic search
- sentence transformers
- sentence similarity
task_categories:
- sentence-similarity
task_ids: []
---
# Oscar EN 2M Embeddings
This dataset contains 2M sentences extracted from the English subset of the OSCAR dataset, and encoded into sentence embeddings using the `sentence-transformers/all-MiniLM-L6-v2` model. |
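A collection of precomputed sentence embeddings like this is typically queried by cosine similarity. A minimal numpy sketch of a nearest-neighbour lookup (384 is the embedding width of `all-MiniLM-L6-v2`; the vectors below are random stand-ins, not actual dataset embeddings):

```python
import numpy as np

def top_k_cosine(query, corpus, k=3):
    """Return indices of the k corpus rows most cosine-similar to `query`."""
    corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = corpus_norm @ query_norm
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 384))          # stand-in for the 2M MiniLM vectors
query = corpus[42] + 0.01 * rng.normal(size=384)  # near-duplicate of row 42
best = top_k_cosine(query, corpus, k=1)
```

At the full 2M-vector scale an exact scan like this still works, but an approximate index (e.g. FAISS or a vector database) is the usual choice.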
BasStein/250000-randomfunctions-2d | 2022-09-02T10:39:39.000Z | [
"region:us"
] | BasStein | null | null | null | 0 | 5 | Entry not found |
NimaBoscarino/butterflies | 2022-09-13T16:52:33.000Z | [
"region:us"
] | NimaBoscarino | null | null | null | 1 | 5 | Entry not found |
ywchoi/pubmed_abstract_9 | 2022-09-13T01:16:52.000Z | [
"region:us"
] | ywchoi | null | null | null | 0 | 5 | Entry not found |
anton-l/earnings22_baseline_5_gram | 2022-10-17T18:35:04.000Z | [
"license:apache-2.0",
"region:us"
] | anton-l | \nThe Earnings 22 dataset (also referred to as earnings22) is a 119-hour corpus of English-language earnings calls collected from global companies.
The primary purpose is to serve as a benchmark for industrial and academic automatic speech recognition (ASR) models on real-world accented speech. | \n@misc{https://doi.org/10.48550/arxiv.2203.15591,
doi = {10.48550/ARXIV.2203.15591},
url = {https://arxiv.org/abs/2203.15591},
author = {Del Rio, Miguel and Ha, Peter and McNamara, Quinten and Miller, Corey and Chandra, Shipra},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Earnings-22: A Practical Benchmark for Accents in the Wild},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
} | null | 1 | 5 | ---
license: apache-2.0
---
|
EMBO/sd-character-level-ner | 2022-10-23T06:41:24.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:named-entity-recognition",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license... | EMBO | This dataset is based on the SourceData database and is intented to facilitate training of NLP tasks in the cell and molecualr biology domain. | @Unpublished{
huggingface: dataset,
title = {SourceData NLP},
authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO},
year={2021}
} | null | 0 | 5 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets: []
task_categories:
- text-classification
- structure-prediction
task_ids:
- multi-class-classification
- named-entity-recognition
- parsing
---
# Dataset Card for sd-nlp
## Table of Contents
- [Dataset Card for EMBO/sd-nlp-non-tokenized](#dataset-card-for-sd-nlp)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-roberta
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org
### Dataset Summary
This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471).
Unlike the dataset [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp), which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not pre-tokenized, but only split into words. Users can therefore use it to fine-tune other models.
Additional details at https://github.com/source-data/soda-roberta
### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
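As a minimal illustration of how IOB2 tags group tokens into entity spans, here is a small decoder sketch (the decoder and the example sentence are hypothetical, not part of the dataset):

```python
def iob2_to_spans(tokens, tags):
    """Collect (entity_type, text) spans from a sequence of IOB2 tags."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], [token]]          # start a new span
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            current[1].append(token)              # continue the open span
        else:                                     # "O" or an inconsistent I- tag
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(toks)) for etype, toks in spans]

# Hypothetical panel-legend fragment using tags from the scheme described below
tokens = ["Treatment", "with", "BI2536", "inhibits", "Plk1"]
tags = ["O", "O", "B-SMALL_MOLECULE", "O", "B-GENEPROD"]
print(iob2_to_spans(tokens, tags))
# → [('SMALL_MOLECULE', 'BI2536'), ('GENEPROD', 'Plk1')]
```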
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. The `PANELIZATION` tags provide the start (`B-PANEL_START`) of these segments and allow training for recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL`: cell types and cell lines.
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the measured variables and are the object of the measurements.
`BORING`: entities are marked with the tag `BORING` when they are of mostly descriptive value and not directly associated with the causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' gene products, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc.).
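The integer label ids that appear in the data instances can be converted back to tag names with a simple lookup table. The ordering below follows the tag list given in the Data Fields section; it is an assumption and should be verified against the dataset's `features` metadata:

```python
# Tag list as printed in the Data Fields section of this card.
# NOTE: the id ordering is an assumption; check dataset.features before relying on it.
NER_TAGS = [
    "O",
    "I-SMALL_MOLECULE", "B-SMALL_MOLECULE",
    "I-GENEPROD", "B-GENEPROD",
    "I-SUBCELLULAR", "B-SUBCELLULAR",
    "I-CELL", "B-CELL",
    "I-TISSUE", "B-TISSUE",
    "I-ORGANISM", "B-ORGANISM",
    "I-EXP_ASSAY", "B-EXP_ASSAY",
]
id2label = dict(enumerate(NER_TAGS))
label2id = {tag: i for i, tag in id2label.items()}

print(id2label[4], label2id["O"])
# → B-GENEPROD 0
```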
### Languages
The text in the dataset is English.
## Dataset Structure
### Data Instances
```json
{'text': '(E) Quantification of the number of cells without γ-Tubulin at centrosomes (γ-Tub -) in pachytene and diplotene spermatocytes in control, Plk1(∆/∆) and BI2536-treated spermatocytes. Data represent average of two biological replicates per condition. ',
 'labels': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            13, 14, 14, 14, 14, 14,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            3, 4, 4, 4, 4, 4, 4, 4, 4,
            0, 0, 0, 0,
            5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
            0, 0,
            3, 4, 4, 4, 4,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            3, 4, 4, 4,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            1, 2, 2, 2, 2, 2,
            0, 0, 0, 0, 0, 0, 0, 0, 0,
            7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
            0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `text`: a `str` with the example text
- `label_ids`: a dictionary composed of character-level lists of tags:
- `entity_types`: `list` of `strings` with the IOB2 tags for entity types; possible values in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
- `panel_start`: `list` of `strings` with IOB2 tags in `["O", "B-PANEL_START"]`
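Because labels are provided at the character level while most token classifiers expect word-level labels, a small projection step is needed before fine-tuning. A hypothetical sketch (taking each word's label from its first character; the function and example are illustrative only, not part of the dataset tooling):

```python
def char_labels_to_word_labels(text, char_labels):
    """Project character-level label ids onto whitespace-delimited words,
    using the label of each word's first character."""
    words, word_labels = [], []
    cursor = 0
    for word in text.split():
        start = text.index(word, cursor)   # position of this word in the text
        words.append(word)
        word_labels.append(char_labels[start])
        cursor = start + len(word)
    return words, word_labels

# Invented mini-example: "Plk1" tagged as a GENEPROD span, "activity" untagged
text = "Plk1 activity"
char_labels = [3, 4, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one id per character
print(char_labels_to_word_labels(text, char_labels))
# → (['Plk1', 'activity'], [3, 0])
```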
### Data Splits
```python
DatasetDict({
train: Dataset({
features: ['text', 'labels'],
num_rows: 66085
})
test: Dataset({
features: ['text', 'labels'],
num_rows: 8225
})
validation: Dataset({
features: ['text', 'labels'],
num_rows: 7948
})
})
```
## Dataset Creation
### Curation Rationale
The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition.
### Source Data
#### Initial Data Collection and Normalization
Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.
#### Who are the source language producers?
The examples are extracted from the figure legends from scientific papers in cell and molecular biology.
### Annotations
#### Annotation process
The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).
#### Who are the annotators?
Curators of the SourceData project.
### Personal and Sensitive Information
None known.
## Considerations for Using the Data
### Social Impact of Dataset
Not applicable.
### Discussion of Biases
The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org).
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Thomas Lemberger, EMBO.
### Licensing Information
CC BY 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset. |
nateraw/airplane-crashes-and-fatalities | 2022-09-27T17:55:18.000Z | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | nateraw | null | null | null | 0 | 5 | ---
license:
- cc-by-nc-sa-4.0
converted_from: kaggle
kaggle_id: thedevastator/airplane-crashes-and-fatalities
---
# Dataset Card for Airplane Crashes and Fatalities
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/thedevastator/airplane-crashes-and-fatalities
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
## Airplane Crashes and Fatalities
_____
This dataset showcases Boeing 707 accidents that have occurred since 1948. The data includes information on the date, time, location, operator, flight number, route, type of aircraft, registration number, cn/In, number of persons on board, fatalities, ground fatalities, and a summary of the accident.
### How to use the dataset
This dataset includes information on over 5,000 airplane crashes around the world.
This is an absolutely essential dataset for anyone interested in aviation safety! Here you will find information on when and where each crash occurred, what type of plane was involved, how many people were killed, and much more.
This dataset is perfect for anyone interested in data visualization or analysis. With so much information available, there are endless possibilities for interesting stories and insights that can be gleaned from this data.
So whether you're a seasoned data pro or just getting started, this dataset is sure to give you plenty to work with. So get started today and see what you can discover!
### Research Ideas
1. Plot a map of all flight routes
2. Analyze what type of aircraft is involved in the most crashes
3. Identify patterns in where/when crashes occur
### Columns
- **index:** the index of the row
- **Date:** the date of the incident
- **Time:** the time of the incident
- **Location:** the location of the incident
- **Operator:** the operator of the aircraft
- **Flight #:** the flight number of the aircraft
- **Route:** the route of the aircraft
- **Type:** the type of aircraft
- **Registration:** the registration of the aircraft
- **cn/In:** the construction number/serial number of the aircraft
- **Aboard:** the number of people on board the aircraft
- **Fatalities:** the number of fatalities in the incident
- **Ground:** the number of people on the ground killed in the incident
- **Summary:** a summary of the incident
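As a sketch of the kind of analysis the columns above support (e.g. research idea 2), here is a minimal pandas example; the rows are invented toy data, only the column names come from the card:

```python
import pandas as pd

# Invented toy rows standing in for the real dataset.
df = pd.DataFrame({
    "Type": ["Douglas DC-3", "Douglas DC-3", "Boeing 707"],
    "Aboard": [20, 15, 120],
    "Fatalities": [20, 3, 0],
})

# Which aircraft type is involved in the most crashes?
crashes_per_type = df["Type"].value_counts()

# Overall survival rate across all incidents.
survival_rate = 1 - df["Fatalities"].sum() / df["Aboard"].sum()

print(crashes_per_type.idxmax())   # → Douglas DC-3
print(round(survival_rate, 3))     # → 0.852
```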
### Acknowledgements
This dataset was obtained from the Data Society. If you use this dataset in your research, please credit the Data Society.
Columns: index, Date, Time, Location, Operator, Flight #, Route, Type, Registration, cn/In, Aboard, Fatalities, Ground, Summary
> [Data Source](https://data.world/data-society)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@thedevastator](https://kaggle.com/thedevastator)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
artemsnegirev/dialogs_from_jokes | 2022-09-27T11:43:32.000Z | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ru",
"license:cc0-1.0",
"region:us"
] | artemsnegirev | null | null | null | 1 | 5 | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: Dialogs from Jokes
size_categories:
- 100K<n<1M
task_categories:
- conversational
task_ids:
- dialogue-generation
license: cc0-1.0
---
Converted to json version of dataset from [Koziev/NLP_Datasets](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data/extract_dialogues_from_anekdots.tar.xz) |
IDEA-CCNL/laion2B-multi-chinese-subset | 2023-04-06T06:32:18.000Z | [
"task_categories:feature-extraction",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:zh",
"license:cc-by-4.0",
"arxiv:2209.02970",
"region:us"
] | IDEA-CCNL | null | null | null | 17 | 5 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: laion2B-multi-chinese-subset
task_categories:
- feature-extraction
---
# laion2B-multi-chinese-subset
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
取自Laion2B多语言多模态数据集中的中文部分,一共143M个图文对。
A subset from Laion2B (a multimodal dataset), around 143M image-text pairs (only Chinese).
## 数据集信息 Dataset Information
大约一共143M个中文图文对。大约占用19GB空间(仅仅是url等文本信息,不包含图片)。
Around 143M Chinese image-text pairs in total, taking about 19GB of space (text information such as URLs only; images are not included).
- Homepage: [laion-5b](https://laion.ai/blog/laion-5b/)
- Huggingface: [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## 下载 Download
```bash
mkdir laion2b_chinese_release && cd laion2b_chinese_release
for i in {00000..00012}; do wget https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/data/train-$i-of-00013.parquet; done
cd ..
```
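The same 13 shard URLs can also be generated programmatically, for example to feed them into `pandas.read_parquet` instead of downloading with `wget` (a sketch; the URL pattern mirrors the shell loop above):

```python
BASE = ("https://huggingface.co/datasets/IDEA-CCNL/"
        "laion2B-multi-chinese-subset/resolve/main/data")
shard_urls = [f"{BASE}/train-{i:05d}-of-00013.parquet" for i in range(13)]

print(len(shard_urls))                   # → 13
print(shard_urls[0].rsplit("/", 1)[-1])  # → train-00000-of-00013.parquet

# e.g. pd.read_parquet(shard_urls[0])    # requires pandas + pyarrow/fsspec
```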
## License
CC-BY-4.0
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
|
hossein20s/enrun-emails-text-classification | 2022-09-27T22:33:36.000Z | [
"region:us"
] | hossein20s | null | null | null | 0 | 5 | Entry not found |
biglam/europeana_newspapers | 2023-01-06T11:42:17.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"language:de",
"language:fr",
"language:el",
"language:et",
"language:fi",
"language:hr",
... | biglam | null | null | null | 2 | 5 | ---
annotations_creators:
- no-annotation
language:
- de
- fr
- el
- et
- fi
- hr
- ji
- pl
- ru
- sr
- sv
- uk
language_creators:
- machine-generated
multilinguality:
- multilingual
pretty_name: Europeana Newspapers
size_categories:
- 1M<n<10M
source_datasets: []
tags:
- newspapers
- lam
- OCR
task_categories:
- text-generation
task_ids:
- language-modeling
--- |
Tidrael/tsl_news | 2022-10-10T14:23:36.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | Tidrael | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | null | 1 | 5 | ---
annotations_creators: []
language:
- en
language_creators:
- machine-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: business-news
size_categories:
- 1K<n<10K
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Top news headlines in finance from BBC News.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Sentiment label: scores below a threshold of 0 are labeled negative (0) and scores above 0 are labeled positive (1).
[More Information Needed]
### Data Splits
The train/test split ratio is 0.9/0.1.
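The sentiment labeling rule described in the Data Fields section can be written as a one-line function (a sketch; the card does not state how a score of exactly 0 is handled, so it is assumed positive here):

```python
def sentiment_label(score: float) -> int:
    # Below 0 → negative (0); above 0 → positive (1).
    # The treatment of exactly 0.0 is an assumption.
    return 1 if score >= 0 else 0

print([sentiment_label(s) for s in (-0.7, -0.01, 0.25)])
# → [0, 0, 1]
```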
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
Bhuvaneshwari/intent_classification | 2022-10-06T13:52:33.000Z | [
"region:us"
] | Bhuvaneshwari | null | null | null | 0 | 5 |