author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
joelito | null | false | 7,788 | false | joelito/Multi_Legal_Pile | 2022-11-14T18:35:22.000Z | null | false | 4832d469fa5e6f02dd1f5fad6aaa5f80e766fedf | [] | [
"annotations_creators:other",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
... | https://huggingface.co/datasets/joelito/Multi_Legal_Pile/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: null
pretty_name: "MultiLegalPile: A Large-Scale Multilingual Corpus for the Legal Domain"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
---
# Dataset Card for MultiLegalPile: A Large-Scale Multilingual Corpus for the Legal Domain
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
### Dataset Summary
The Multi_Legal_Pile is a large-scale multilingual legal dataset suited for pretraining language models.
It covers 24 languages and four legal text types.
### Supported Tasks and Leaderboards
The dataset supports the fill-mask task (masked language modeling).
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt,
ro, sk, sl, sv
## Dataset Structure
It is structured in the following format:
text_type -> language -> jurisdiction.jsonl.xz
text_type is one of the following:
- caselaw
- contracts
- legislation
- other
The dataset can be used in the following way:
```python
from datasets import load_dataset

config = 'en_contracts'
dataset = load_dataset('joelito/Multi_Legal_Pile', config, split='train', streaming=True)
```
'config' is a combination of language and text_type, e.g. 'en_contracts' or 'de_caselaw'.
To load all the languages or all the text_types, use 'all' instead of the language or text_type (e.g., 'all_legislation').
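The available config names follow directly from this naming scheme. A minimal sketch (assuming, as described above, that 'all' may stand in for either part; the language list here is abbreviated for illustration):

```python
# Config names combine a language code with a text type: '{language}_{text_type}'.
# 'all' in either position selects every language or every text type.
languages = ["en", "de", "fr"]  # abbreviated; the dataset covers 24 languages
text_types = ["caselaw", "contracts", "legislation", "other"]

configs = [f"{lang}_{tt}" for lang in languages for tt in text_types]
configs += [f"all_{tt}" for tt in text_types]     # e.g. 'all_legislation'
configs += [f"{lang}_all" for lang in languages]  # e.g. 'de_all'

print("en_contracts" in configs, "de_caselaw" in configs)
```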
### Data Instances
The file format is jsonl.xz and there is one split available ("train").
The complete dataset (564GB) consists of four large subsets:
- Native Multi Legal Pile (30GB)
- Eurlex Resources (179GB)
- MC4 Legal (133GB)
- Pile of Law (222GB)
#### Native Multilingual Legal Pile data
| Language | Text Type | Jurisdiction | Source | Size (MB) | Tokens | Documents | Words/Document | URL | License |
|:---------|:------------|:-------------|:-----------------------------------|----------:|-------:|----------:|---------------:|-----|--------:|
| bg | legislation | Bulgaria | MARCELL | 588 | xxx | xxx | xxx | | |
| cs | caselaw | Czechia | CzCDC Constitutional Court | 713 | xxx | xxx | xxx | | |
| cs | caselaw | Czechia | CzCDC Supreme Administrative Court | 1248 | xxx | xxx | xxx | | |
| cs | caselaw | Czechia | CzCDC Supreme Court | 1566 | xxx | xxx | xxx | | |
| da | caselaw | Denmark | DDSC | 205 | xxx | xxx | xxx | | |
| da | legislation | Denmark | DDSC | 1464 | xxx | xxx | xxx | | |
| de | caselaw | Germany | openlegaldata | 4310 | xxx | xxx | xxx | | |
| de | caselaw | Switzerland | entscheidsuche | 6937 | xxx | xxx | xxx | | |
| de | legislation | Germany | openlegaldata | 96 | xxx | xxx | xxx | | |
| de | legislation | Switzerland | lexfind | 299 | xxx | xxx | xxx | | |
| en | legislation | Switzerland | lexfind | 9 | xxx | xxx | xxx | | |
| en | legislation | UK | uk-lex | 262 | xxx | xxx | xxx | | |
| fr | caselaw | Belgium | jurportal | 104 | xxx | xxx | xxx | | |
| fr | caselaw | France | CASS | 266 | xxx | xxx | xxx | | |
| fr | caselaw | Luxembourg | judoc | 277 | xxx | xxx | xxx | | |
| fr | caselaw | Switzerland | entscheidsuche | 5100 | xxx | xxx | xxx | | |
| fr | legislation | Switzerland | lexfind | 219 | xxx | xxx | xxx | | |
| fr | legislation | Belgium | ejustice | 178 | xxx | xxx | xxx | | |
| hu | legislation | Hungary | MARCELL | 239 | xxx | xxx | xxx | | |
| it | caselaw | Switzerland | entscheidsuche | 1274 | xxx | xxx | xxx | | |
| it | legislation | Switzerland | lexfind | 141 | xxx | xxx | xxx | | |
| nl | legislation | Belgium | ejustice | 178 | xxx | xxx | xxx | | |
| pl | legislation | Poland | MARCELL | 264 | xxx | xxx | xxx | | |
| pt | caselaw | Brazil | RulingBR | 173 | xxx | xxx | xxx | | |
| ro | legislation | Romania | MARCELL | 2704 | xxx | xxx | xxx | | |
| sk | legislation | Slovakia | MARCELL | 192 | xxx | xxx | xxx | | |
| sl | legislation | Slovenia | MARCELL | 753 | xxx | xxx | xxx | | |
| total | all | all | all | 29759 | xxx | xxx | xxx | | |
#### Eurlex Resources
See [Eurlex Resources](https://huggingface.co/datasets/joelito/eurlex_resources#data-instances) for more information.
#### MC4 Legal
See [MC4 Legal](https://huggingface.co/datasets/joelito/mc4_legal#data-instances) for more information.
#### Pile-of-Law
See [Pile-of-Law](https://huggingface.co/datasets/pile-of-law/pile-of-law#data-instances) for more information.
For simplicity and to keep the data balanced across jurisdictions and languages,
we disregard many US resources that are either very specialized (e.g., tax rulings),
outdated/historical (e.g., founding letters),
very small (less than 20MB),
not legal language in the strict sense (conversations),
or overlapping with other sources (study materials).
If you are interested in a US-based (US-biased) model, refer to the "Pile of (US) Law" by Henderson et al. (2022).
Analyses are placed in the "other" category because similar text likely also occurs in mc4_legal.
| Language | Type | Jurisdiction | Source | Size (MB) | Tokens | Documents | Words/Document |
|:----------|:------------|:-------------|:---------------------------|----------:|-------:|----------:|---------------:|
| en | caselaw | US | courtlisteneropinions | 79050 | xxx | xxx | xxx |
| en | caselaw | US | courtlistenerdocketentries | 69510 | xxx | xxx | xxx |
| en | caselaw | US | scotus_filings | 2010 | xxx | xxx | xxx |
| en | caselaw | EU | echr | 149 | xxx | xxx | xxx |
| en | caselaw | Canada | canadian_decisions | 243 | xxx | xxx | xxx |
| en | contracts | US | atticus_contracts | 41600 | xxx | xxx | xxx |
| en | contracts | US | edgar | 14350 | xxx | xxx | xxx |
| en | contracts | US | cfpb_creditcard_contracts | 94 | xxx | xxx | xxx |
| en | legislation | US | uscode | 358 | xxx | xxx | xxx |
| en | legislation | US | state_codes | 9030 | xxx | xxx | xxx |
| en | legislation | US | us_bills | 1690 | xxx | xxx | xxx |
| en | legislation | US | federal_register | 212 | xxx | xxx | xxx |
| en | legislation | US | cfr | 894 | xxx | xxx | xxx |
| en | legislation | N/A | constitutions | 33 | xxx | xxx | xxx |
| en | other | US | oig | 2530 | xxx | xxx | xxx |
| en | other | US | olc_memos | 49 | xxx | xxx | xxx |
| total | all | all | all | 221851 | xxx | xxx | xxx |
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
TODO add citation
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| ||
huggingface-projects | null | null | null | false | 1 | false | huggingface-projects/contribute-a-dataset | 2022-09-26T10:33:05.000Z | null | false | a2f60155ef84fbb118b337eafa391351277003b3 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/huggingface-projects/contribute-a-dataset/resolve/main/README.md | ---
license: apache-2.0
---
|
Heisenbergzz1 | null | null | null | false | 2 | false | Heisenbergzz1/abdullah-jaber | 2022-09-26T10:56:14.000Z | null | false | 1b6af9f6fbd19bb68f82515f4f6eca993d643b23 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Heisenbergzz1/abdullah-jaber/resolve/main/README.md | ---
license: afl-3.0
---
|
dary | null | null | null | false | 1 | false | dary/agagga_oaoa | 2022-09-26T10:59:06.000Z | null | false | 82fff01dfe20340fca20b50b66f61cd7e6d7a2e4 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/dary/agagga_oaoa/resolve/main/README.md | ---
license: openrail
---
|
osbm | null | null | null | false | 1 | false | osbm/zenodo | 2022-09-26T16:19:39.000Z | null | false | 42f9bb791e5996ee1a2492a381d810e3af9e80fe | [] | [] | https://huggingface.co/datasets/osbm/zenodo/resolve/main/README.md | ---
---
# Download Zenodo datasets using Hugging Face datasets
```python
from datasets import load_dataset
dataset = load_dataset("zenodo", "10.5281/zenodo.4285300")
```
or download the dataset to a desired directory
```python
from datasets import load_dataset
dataset = load_dataset("zenodo", "10.5281/zenodo.4285300", data_dir="path/to/dataset")
```
|
ChickenHiiro | null | null | null | false | 2 | false | ChickenHiiro/Duc_Luu | 2022-09-27T02:02:03.000Z | null | false | e10538f40436c73126e8fbcf08502cbc6bdb751b | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/ChickenHiiro/Duc_Luu/resolve/main/README.md | ---
license: artistic-2.0
---
|
ali4546 | null | null | null | false | 1 | false | ali4546/ma | 2022-09-26T12:23:43.000Z | null | false | feb76ecc5e78064880e0b784bc0fe3daa92fc330 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/ali4546/ma/resolve/main/README.md | ---
license: afl-3.0
---
|
laion | null | null | null | false | 5 | false | laion/laion2B-multi-joined-translated-to-en | 2022-10-11T20:33:48.000Z | null | false | f18057211d797807f29c40fd880c654b78eeb83b | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en/resolve/main/README.md | ---
license: cc-by-4.0
---
|
EMBO | null | @Unpublished{
huggingface: dataset,
title = {SourceData NLP},
authors={Thomas Lemberger & Jorge Abreu-Vicente, EMBO},
year={2021}
} | This dataset is based on the SourceData database and is intended to facilitate training of NLP tasks in the cell and molecualr biology domain. | false | 15 | false | EMBO/sd-nlp-v2 | 2022-09-26T12:47:16.000Z | null | false | 6b7cdd494e42ae91bea2ac6aceeeed38132b12cd | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/EMBO/sd-nlp-v2/resolve/main/README.md | ---
license: cc-by-4.0
---
|
tomvo | null | null | null | false | 2 | false | tomvo/test_images | 2022-09-26T18:28:16.000Z | null | false | 62245ed0664652b85c4360f2320b59bbb8a83cb8 | [] | [] | https://huggingface.co/datasets/tomvo/test_images/resolve/main/README.md | |
datascopum | null | null | null | false | 1 | false | datascopum/datascopum | 2022-09-29T16:33:40.000Z | null | false | 05f2b9a2b864e04ec1a969f6d31923a776307c53 | [] | [] | https://huggingface.co/datasets/datascopum/datascopum/resolve/main/README.md | ........ |
FerdinandASH | null | null | null | false | 1 | false | FerdinandASH/Ferdinand | 2022-09-26T15:16:41.000Z | null | false | 0a661c385f1c7ceaa45f8f5cd72abb8ea76d3851 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/FerdinandASH/Ferdinand/resolve/main/README.md | ---
license: afl-3.0
---
|
open-source-metrics | null | null | null | false | 2,704 | false | open-source-metrics/model-repos-stats | 2022-11-15T03:53:22.000Z | null | false | c8ebadd65821787266a282693757cafc94fdb060 | [] | [] | https://huggingface.co/datasets/open-source-metrics/model-repos-stats/resolve/main/README.md | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: repo_id
dtype: string
- name: author
dtype: string
- name: model_type
dtype: string
- name: files_per_repo
dtype: int64
- name: downloads_30d
dtype: int64
- name: library
dtype: string
- name: likes
dtype: int64
- name: pipeline
dtype: string
- name: pytorch
dtype: bool
- name: tensorflow
dtype: bool
- name: jax
dtype: bool
- name: license
dtype: string
- name: languages
dtype: string
- name: datasets
dtype: string
- name: co2
dtype: string
- name: prs_count
dtype: int64
- name: prs_open
dtype: int64
- name: prs_merged
dtype: int64
- name: prs_closed
dtype: int64
- name: discussions_count
dtype: int64
- name: discussions_open
dtype: int64
- name: discussions_closed
dtype: int64
- name: tags
dtype: string
- name: has_model_index
dtype: bool
- name: has_metadata
dtype: bool
- name: has_text
dtype: bool
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 20182549
num_examples: 87992
download_size: 3476866
dataset_size: 20182549
---
# Dataset Card for "model-repos-stats"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ysharma | null | null | null | false | 22 | false | ysharma/short_jokes | 2022-09-26T17:11:06.000Z | null | false | da31b6c38403a4811b20342486bdf0ec2a724a2a | [] | [
"license:mit"
] | https://huggingface.co/datasets/ysharma/short_jokes/resolve/main/README.md | ---
license: mit
---
**Context**
Generating humor is a complex task in the domain of machine learning, and it requires models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems are difficult to solve for a number of reasons, one of which is the lack of a database that provides an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes.
You can visit the [Github repository](https://github.com/amoudgl/short-jokes-dataset) from [amoudgl](https://github.com/amoudgl) for more information regarding collection of data and the scripts used.
**Content**
This dataset is in the form of a csv file containing 231,657 jokes. Length of jokes ranges from 10 to 200 characters. Each line in the file contains a unique ID and joke.
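As a sketch, the ID-and-joke CSV layout described above can be parsed with the standard `csv` module. The column names `ID` and `Joke` and the in-memory sample rows are assumptions for illustration, not taken from the file:

```python
import csv
import io

# In-memory stand-in for the real file, which contains 231,657 rows.
sample = (
    "ID,Joke\n"
    "1,I told my computer a joke. It didn't laugh.\n"
    "2,Why do programmers prefer dark mode? Because light attracts bugs.\n"
)
jokes = {row["ID"]: row["Joke"] for row in csv.DictReader(io.StringIO(sample))}

# Joke lengths should fall in the 10-200 character range described above.
lengths = [len(j) for j in jokes.values()]
print(len(jokes), min(lengths), max(lengths))
```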
**Disclaimer**
An effort has been made to keep the jokes as clean as possible. Since the data has been collected by scraping websites, a few jokes may nevertheless be inappropriate or offensive to some people.
**Note**
This dataset is taken from Kaggle dataset that can be found [here](https://www.kaggle.com/datasets/abhinavmoudgil95/short-jokes). |
Worldwars | null | null | null | false | 1 | false | Worldwars/caka | 2022-09-26T17:15:44.000Z | null | false | 7f7c09a2950eca4bbafefca78196015ffaa3059f | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/Worldwars/caka/resolve/main/README.md | ---
license: cc0-1.0
---
|
cjvt | null | @inproceedings{armendariz-etal-2020-semeval,
title = "{SemEval-2020} {T}ask 3: Graded Word Similarity in Context ({GWSC})",
author = "Armendariz, Carlos S. and
Purver, Matthew and
Pollak, Senja and
Ljube{\v{s}}i{\'{c}}, Nikola and
Ul{\v{c}}ar, Matej and
Robnik-{\v{S}}ikonja, Marko and
Vuli{\'{c}}, Ivan and
Pilehvar, Mohammad Taher",
booktitle = "Proceedings of the 14th International Workshop on Semantic Evaluation",
year = "2020",
address="Online"
} | The dataset contains human similarity ratings for pairs of words. The annotators were presented with contexts that
contained both of the words in the pair and the dataset features two different contexts per pair. The words were
sourced from the English, Croatian, Finnish and Slovenian versions of the original Simlex dataset. | false | 2 | false | cjvt/cosimlex | 2022-10-21T07:34:58.000Z | null | false | de93f205b1d46c99e45e3da694207776da2bbf63 | [] | [
"annotations_creators:crowdsourced",
"language_creators:found",
"language:en",
"language:hr",
"language:sl",
"language:fi",
"license:gpl-3.0",
"multilinguality:multilingual",
"size_categories:n<1K",
"task_categories:other",
"tags:graded-word-similarity-in-context"
] | https://huggingface.co/datasets/cjvt/cosimlex/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
- hr
- sl
- fi
license:
- gpl-3.0
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets: []
task_categories:
- other
task_ids: []
pretty_name: CoSimLex
tags:
- graded-word-similarity-in-context
---
# Dataset Card for CoSimLex
### Dataset Summary
The dataset contains human similarity ratings for pairs of words. The annotators were presented with contexts that contained both of the words in the pair and the dataset features two different contexts per pair. The words were sourced from the English, Croatian, Finnish and Slovenian versions of the original Simlex dataset.
Statistics:
- 340 English pairs (config `en`),
- 112 Croatian pairs (config `hr`),
- 111 Slovenian pairs (config `sl`),
- 24 Finnish pairs (config `fi`).
### Supported Tasks and Leaderboards
Graded word similarity in context.
### Languages
English, Croatian, Slovenian, Finnish.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'word1': 'absence',
'word2': 'presence',
'context1': 'African slaves from Angola and Mozambique were also present, but in fewer numbers than in other Brazilian areas, because Paraná was a poor region that did not need much slave manpower. The immigration grew in the mid-19th century, mostly composed of Italian, German, Polish, Ukrainian, and Japanese peoples. While Poles and Ukrainians are present in Paraná, their <strong>presence</strong> in the rest of Brazil is almost <strong>absence</strong>.',
'context2': 'The Chinese had become almost impossible to deal with because of the turmoil associated with the cultural revolution. The North Vietnamese <strong>presence</strong> in Eastern Cambodia had grown so large that it was destabilizing Cambodia politically and economically. Further, when the Cambodian left went underground in the late 1960s, Sihanouk had to make concessions to the right in the <strong>absence</strong> of any force that he could play off against them.',
'sim1': 2.2699999809265137,
'sim2': 1.3700000047683716,
'stdev1': 2.890000104904175,
'stdev2': 1.7899999618530273,
'pvalue': 0.2409999966621399,
'word1_context1': 'absence',
'word2_context1': 'presence',
'word1_context2': 'absence',
'word2_context2': 'presence'
}
```
### Data Fields
- `word1`: a string representing the first word in the pair. Uninflected form.
- `word2`: a string representing the second word in the pair. Uninflected form.
- `context1`: a string representing the first context containing the pair of words. The target words are marked with `<strong></strong>` tags.
- `context2`: a string representing the second context containing the pair of words. The target words are marked with `<strong></strong>` tags.
- `sim1`: a float representing the mean of the similarity scores within the first context.
- `sim2`: a float representing the mean of the similarity scores within the second context.
- `stdev1`: a float representing the standard deviation of the scores within the first context.
- `stdev2`: a float representing the standard deviation of the scores within the second context.
- `pvalue`: a float representing the p-value calculated using the Mann-Whitney U test.
- `word1_context1`: a string representing the inflected version of the first word as it appears in the first context.
- `word2_context1`: a string representing the inflected version of the second word as it appears in the first context.
- `word1_context2`: a string representing the inflected version of the first word as it appears in the second context.
- `word2_context2`: a string representing the inflected version of the second word as it appears in the second context.
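Putting these fields together, a small sketch (using the values copied from the sample instance above) shows how the perceived similarity shifts between the two contexts:

```python
# Values copied from the sample instance shown earlier.
instance = {
    "sim1": 2.2699999809265137,    # mean similarity rating in context 1
    "sim2": 1.3700000047683716,    # mean similarity rating in context 2
    "pvalue": 0.2409999966621399,  # Mann-Whitney U test p-value
}

# Size of the context-induced similarity shift, and whether it is
# significant at the conventional 0.05 level.
shift = abs(instance["sim1"] - instance["sim2"])
significant = instance["pvalue"] < 0.05
print(round(shift, 2), significant)  # 0.9 False
```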
## Additional Information
### Dataset Curators
Carlos Armendariz; et al. (please see http://hdl.handle.net/11356/1308 for the full list)
### Licensing Information
GNU GPL v3.0.
### Citation Information
```
@inproceedings{armendariz-etal-2020-semeval,
title = "{SemEval-2020} {T}ask 3: Graded Word Similarity in Context ({GWSC})",
author = "Armendariz, Carlos S. and
Purver, Matthew and
Pollak, Senja and
Ljube{\v{s}}i{\'{c}}, Nikola and
Ul{\v{c}}ar, Matej and
Robnik-{\v{S}}ikonja, Marko and
Vuli{\'{c}}, Ivan and
Pilehvar, Mohammad Taher",
booktitle = "Proceedings of the 14th International Workshop on Semantic Evaluation",
year = "2020",
address="Online"
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
|
nateraw | null | null | null | false | 19 | false | nateraw/airplane-crashes-and-fatalities | 2022-09-27T17:55:18.000Z | null | false | 16e24521436eaf961e62b0406744617666a741ba | [] | [
"license:cc-by-nc-sa-4.0",
"converted_from:kaggle",
"kaggle_id:thedevastator/airplane-crashes-and-fatalities"
] | https://huggingface.co/datasets/nateraw/airplane-crashes-and-fatalities/resolve/main/README.md | ---
license:
- cc-by-nc-sa-4.0
converted_from: kaggle
kaggle_id: thedevastator/airplane-crashes-and-fatalities
---
# Dataset Card for Airplane Crashes and Fatalities
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://kaggle.com/datasets/thedevastator/airplane-crashes-and-fatalities
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
## Airplane Crashes and Fatalities
_____
This dataset showcases Boeing 707 accidents that have occurred since 1948. The data includes information on the date, time, location, operator, flight number, route, type of aircraft, registration number, cn/In, number of persons on board, fatalities, ground fatalities, and a summary of the accident.
### How to use the dataset
This dataset includes information on over 5,000 airplane crashes around the world.
This is an absolutely essential dataset for anyone interested in aviation safety! Here you will find information on when and where each crash occurred, what type of plane was involved, how many people were killed, and much more.
This dataset is perfect for anyone interested in data visualization or analysis. With so much information available, there are endless possibilities for interesting stories and insights that can be gleaned from this data.
So whether you're a seasoned data pro or just getting started, this dataset is sure to give you plenty to work with. So get started today and see what you can discover!
### Research Ideas
1. Plot a map of all flight routes
2. Analyze what type of aircraft is involved in the most crashes
3. Identify patterns in where/when crashes occur
### Columns
- **index:** the index of the row
- **Date:** the date of the incident
- **Time:** the time of the incident
- **Location:** the location of the incident
- **Operator:** the operator of the aircraft
- **Flight #:** the flight number of the aircraft
- **Route:** the route of the aircraft
- **Type:** the type of aircraft
- **Registration:** the registration of the aircraft
- **cn/In:** the construction number/serial number of the aircraft
- **Aboard:** the number of people on board the aircraft
- **Fatalities:** the number of fatalities in the incident
- **Ground:** the number of people on the ground killed in the incident
- **Summary:** a summary of the incident
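From the `Aboard`, `Fatalities`, and `Ground` columns, per-crash survivor and total-death counts can be derived. A hedged sketch on a made-up record (field values are illustrative, not taken from the dataset):

```python
# Illustrative record with the column layout described above.
record = {
    "Date": "09/17/1908",
    "Location": "Fort Myer, Virginia",
    "Aboard": 2,      # persons on board
    "Fatalities": 1,  # deaths among those on board
    "Ground": 0,      # deaths on the ground
}

survivors = record["Aboard"] - record["Fatalities"]
total_deaths = record["Fatalities"] + record["Ground"]
print(survivors, total_deaths)  # 1 1
```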
### Acknowledgements
This dataset was obtained from the Data Society. If you use this dataset in your research, please credit the Data Society.
Columns: index, Date, Time, Location, Operator, Flight #, Route, Type, Registration, cn/In, Aboard, Fatalities, Ground, Summary
> [Data Source](https://data.world/data-society)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was shared by [@thedevastator](https://kaggle.com/thedevastator)
### Licensing Information
The license for this dataset is cc-by-nc-sa-4.0
### Citation Information
```bibtex
[More Information Needed]
```
### Contributions
[More Information Needed] |
cfilt | null | null | null | false | 1 | false | cfilt/AI-OpenMic | 2022-09-26T20:41:52.000Z | null | false | 408981fdb52b04955973f83fa16827f73f351971 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/cfilt/AI-OpenMic/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
valentinabrzt | null | null | null | false | 2 | false | valentinabrzt/datasettttttttt | 2022-09-26T21:13:37.000Z | null | false | 1a417e7ef6997cabeb2e864470118d1d5ed93b40 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/valentinabrzt/datasettttttttt/resolve/main/README.md | ---
license: afl-3.0
---
|
Kunling | null | null | null | false | 2 | false | Kunling/layoutlm_resume_data | 2022-09-29T05:18:32.000Z | null | false | f4daca16419351170bc5d882b03459f60524c9c7 | [] | [
"license:bsd"
] | https://huggingface.co/datasets/Kunling/layoutlm_resume_data/resolve/main/README.md | ---
license: bsd
---
|
srvs | null | null | null | false | 2 | false | srvs/training | 2022-09-26T23:21:44.000Z | null | false | 209c2baf698f5693e8b2f755a21cdcb804814b3e | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/srvs/training/resolve/main/README.md | ---
license: artistic-2.0
---
|
Ceetar | null | null | null | false | 2 | false | Ceetar/MetsTweets | 2022-09-27T00:08:51.000Z | null | false | d4548d8a0d713c364d69e6dafeec59d3c7717026 | [] | [] | https://huggingface.co/datasets/Ceetar/MetsTweets/resolve/main/README.md | Tweets containing '#Mets' from early August through late September |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-08a58b-1563555688 | 2022-09-27T04:26:16.000Z | null | false | 5e26419ab91ed4a212eb945097dfc3b5d0687401 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-08a58b-1563555688/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: Tristan/opt-66b-copy
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Tristan/opt-66b-copy
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Tristan](https://huggingface.co/Tristan) for evaluating this model. |
ltjabc | null | null | null | false | 2 | false | ltjabc/sanguosha | 2022-09-27T02:04:44.000Z | null | false | ec2cb334401dfe22f8b85a56ed47018c56350a44 | [] | [
"license:other"
] | https://huggingface.co/datasets/ltjabc/sanguosha/resolve/main/README.md | ---
license: other
---
|
cays | null | null | null | false | 2 | false | cays/LX0 | 2022-09-27T02:29:16.000Z | null | false | bdd784fd553e9e6546ca8167a7e23e7189e42c2f | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/cays/LX0/resolve/main/README.md | ---
license: artistic-2.0
---
|
tcsenpai | null | null | null | false | 2 | false | tcsenpai/aggregated_captcha_images_and_text | 2022-09-27T03:31:17.000Z | null | false | 5bf51cd1b371b4c8aa0fe48d64123e20b25cdaf7 | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/tcsenpai/aggregated_captcha_images_and_text/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
# Aggregated Captcha Images and Text
## Credits
All the images (not the texts) here contained have been downloaded and selected from various datasets on kaggle.com
### What is this?
This is a dataset containing several hundred thousand images taken from real, in-use captchas (reCaptcha, hCaptcha and various others), together with an equally large number of random texts of length 4-8, each rendered in one of 363 different fonts with random noise, size, colors and scratches.
While the text portion may prove difficult for the models you train to recognize, the quantity of images gives a model a significant chance of recognizing captcha images.
### Disclaimer
This dataset is NOT intended to break any ToS of any website or to execute malicious, illegal or unethical actions. This dataset is distributed for a purely informative and educational purpose, namely the study of the weaknesses and strengths of current protection systems.
You will, for example, notice how puzzle-based captchas are highly resistant to this kind of analysis. |
shubhamg2208 | null | \ | Lexicap contains the captions for every Lex Fridman Podcast episode. It is created by [Dr. Andrej Karpathy](https://twitter.com/karpathy).
There are 430 caption files available. There are 2 types of files:
- large
- small
Each file name follows the format `episode_{episode_number}_{file_type}.vtt`. | false | 4 | false | shubhamg2208/lexicap | 2022-09-27T04:41:00.000Z | null | false | 76aeb129b64a67d72998420da80c2e51032c6907 | [] | [
"lexicap:found",
"language:en",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"tags:karpathy,whisper,openai",
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:sentiment-analysis",
"task_ids:dialogu... | https://huggingface.co/datasets/shubhamg2208/lexicap/resolve/main/README.md | ---
lexicap:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: 'Lexicap: Lex Fridman Podcast Whisper captions'
size_categories:
- n<1K
source_datasets:
- original
tags:
- karpathy,whisper,openai
task_categories:
- text-classification
- text-generation
task_ids:
- sentiment-analysis
- dialogue-modeling
- language-modeling
---
# Dataset Card for Lexicap
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
-
## Dataset Structure
### Data Instances
Train and test dataset.
### Data Fields
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
### Contributions
|
Worldwars | null | null | null | false | 2 | false | Worldwars/caka1 | 2022-09-27T08:00:24.000Z | null | false | 840a29a57e1be9102cd03a752c7512ad0ecd1bee | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Worldwars/caka1/resolve/main/README.md | ---
license: artistic-2.0
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-ba6080-1564655701 | 2022-09-28T12:45:02.000Z | null | false | d5dfe0d2fdc72e5d881a47cd3e8e8e57c2ca5b1b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:kmfoda/booksum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-kmfoda__booksum-kmfoda__booksum-ba6080-1564655701/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- kmfoda/booksum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
metrics: []
dataset_name: kmfoda/booksum
dataset_config: kmfoda--booksum
dataset_split: test
col_mapping:
text: chapter
target: summary_text
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-billsum-default-37bdaa-1564755702 | 2022-09-28T14:20:08.000Z | null | false | 5b001451c8a86ecabf3e8aa1486ab7780534b48a | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:billsum"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-billsum-default-37bdaa-1564755702/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- billsum
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
metrics: []
dataset_name: billsum
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-launch__gov_report-plain_text-45e121-1564955705 | 2022-09-27T23:02:40.000Z | null | false | ee5cf7dc24900b58bd4a0f8c0de335ad4f7bdb4d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-launch__gov_report-plain_text-45e121-1564955705/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
metrics: []
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP17
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-launch__gov_report-plain_text-45e121-1564955706 | 2022-09-27T23:17:35.000Z | null | false | b080eb0ef952f2c8283f6bf0186d2e03bf88b527 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:launch/gov_report"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-launch__gov_report-plain_text-45e121-1564955706/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- launch/gov_report
eval_info:
task: summarization
model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
metrics: []
dataset_name: launch/gov_report
dataset_config: plain_text
dataset_split: test
col_mapping:
text: document
target: summary
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP15
* Dataset: launch/gov_report
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. |
asdfdvfbdb | null | null | null | false | 2 | false | asdfdvfbdb/efefw | 2022-09-27T17:22:01.000Z | null | false | 64ce816c8fa6cffd09a52c77ed4bffe769228cb4 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/asdfdvfbdb/efefw/resolve/main/README.md | ---
license: afl-3.0
---
|
astronomou | null | null | null | false | 2 | false | astronomou/me | 2022-09-27T10:50:07.000Z | null | false | 5923b9d0b341ae83cb27a529a920ad206724c689 | [] | [
"license:other"
] | https://huggingface.co/datasets/astronomou/me/resolve/main/README.md | ---
license: other
---
|
n1ghtf4l1 | null | null | null | false | 6 | false | n1ghtf4l1/automatic-dissection | 2022-11-01T07:08:47.000Z | null | false | 702c3ff0bee31d2479f7f98a1095210683c3fec0 | [] | [
"license:mit"
] | https://huggingface.co/datasets/n1ghtf4l1/automatic-dissection/resolve/main/README.md | ---
license: mit
---
#### automatic-dissection
# **HuBMAP + HPA - Hacking the Human Body**
##### **Segment multi-organ functional tissue units in biopsy slides from several different organs.**
### **Overview**
When you think of "life hacks," normally you’d imagine productivity techniques. But how about the kind that helps you understand your body at a molecular level? It may be possible! Researchers must first determine the function and relationships among the 37 trillion cells that make up the human body. A better understanding of our cellular composition could help people live healthier, longer lives.
A previous Kaggle [competition](https://www.kaggle.com/c/hubmap-kidney-segmentation) aimed to annotate cell population neighborhoods that perform an organ’s main physiologic function, also called functional tissue units (FTUs). Manually annotating FTUs (e.g., glomeruli in kidney or alveoli in the lung) is a time-consuming process. In the average kidney, there are over 1 million glomeruli FTUs. While there are existing cell and FTU segmentation methods, we want to push the boundaries by building algorithms that generalize across different organs and are robust to dataset differences.
The [Human BioMolecular Atlas Program](https://hubmapconsortium.org/) (HuBMAP) is working to create a [Human Reference Atlas](https://www.nature.com/articles/s41556-021-00788-6) at the cellular level. Sponsored by the National Institutes of Health (NIH), HuBMAP and Indiana University’s Cyberinfrastructure for Network Science Center (CNS) have partnered with institutions across the globe for this endeavor. A major partner is the [Human Protein Atlas](https://www.proteinatlas.org/) (HPA), a Swedish research program aiming to map the protein expression in human cells, tissues, and organs, funded by the Knut and Alice Wallenberg Foundation.
In this repository, we [aim](https://www.kaggle.com/competitions/hubmap-organ-segmentation/) to identify and segment functional tissue units (FTUs) across five human organs. We have to build a model using a dataset of tissue section images, with the best submissions segmenting FTUs as accurately as possible.
If successful, we can help accelerate the world’s understanding of the relationships between cell and tissue organization. With a better idea of the relationship of cells, researchers will have more insight into the function of cells that impact human health. Further, the Human Reference Atlas constructed by HuBMAP will be freely available for use by researchers and pharmaceutical companies alike, potentially improving and prolonging human life.
### **Dataset Description**
The goal is to identify the locations of each functional tissue unit (FTU) in biopsy slides from several different organs. The underlying data includes imagery from different sources prepared with different protocols at a variety of resolutions, reflecting typical challenges for working with medical data.
This project uses [data](https://huggingface.co/datasets/n1ghtf4l1/automatic-dissection) from two different consortia, the [Human Protein Atlas](https://www.proteinatlas.org/) (HPA) and [Human BioMolecular Atlas Program](https://hubmapconsortium.org/) (HuBMAP). The training dataset consists of data from public HPA data, the public test set is a combination of private HPA data and HuBMAP data, and the private test set contains only HuBMAP data. Adapting models to function properly when presented with data that was prepared using a different protocol will be one of the core challenges of this competition. While this is expected to make the problem more difficult, developing models that generalize is a key goal of this endeavor.
### **Files**
**[train/test].csv** Metadata for the train/test set. Only the first few rows of the test set are available for download.
- ```id``` - The image ID.
- ```organ``` - The organ that the biopsy sample was taken from.
- ```data_source``` - Whether the image was provided by HuBMAP or HPA.
- ```img_height``` - The height of the image in pixels.
- ```img_width``` - The width of the image in pixels.
- ```pixel_size``` - The height/width of a single pixel from this image in micrometers. All HPA images have a pixel size of 0.4 µm. For HuBMAP imagery the pixel size is 0.5 µm for kidney, 0.2290 µm for large intestine, 0.7562 µm for lung, 0.4945 µm for spleen, and 6.263 µm for prostate.
- ```tissue_thickness``` - The thickness of the biopsy sample in micrometers. All HPA images have a thickness of 4 µm. The HuBMAP samples have tissue slice thicknesses 10 µm for kidney, 8 µm for large intestine, 4 µm for spleen, 5 µm for lung, and 5 µm for prostate.
- ```rle``` - The target column. A run length encoded copy of the annotations. Provided for the training set only.
- ```age``` - The patient's age in years. Provided for the training set only.
- ```sex``` - The gender of the patient. Provided for the training set only.
**sample_submission.csv**
- ```id``` - The image ID.
- ```rle``` - A run length encoded mask of the FTUs in the image.
**[train/test]_images/** The images. Expect roughly 550 images in the hidden test set. All HPA images are 3000 x 3000 pixels with a tissue area within the image around 2500 x 2500 pixels. The HuBMAP images range in size from 4500x4500 down to 160x160 pixels. HPA samples were stained with antibodies visualized with 3,3'-diaminobenzidine (DAB) and counterstained with hematoxylin. HuBMAP images were prepared using Periodic acid-Schiff (PAS)/hematoxylin and eosin (H&E) stains. All images used have at least one FTU. All tissue data used in this competition is from healthy donors that pathologists identified as pathologically unremarkable tissue.
**train_annotations/** The annotations provided in the format of points that define the boundaries of the polygon masks of the FTUs.
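The `rle` masks can be decoded with a few lines of NumPy. A minimal sketch, assuming the usual Kaggle-style encoding of space-separated, 1-indexed `(start, length)` pairs over the flattened image (verify the exact pixel ordering, row- versus column-major, against the competition's own tooling before relying on this):

```python
import numpy as np

def rle_decode(rle, height, width):
    """Decode a run-length-encoded mask string into a binary array.

    Assumes space-separated, 1-indexed (start, length) pairs over the
    flattened image -- check this against the competition's tooling.
    """
    mask = np.zeros(height * width, dtype=np.uint8)
    tokens = np.asarray(rle.split(), dtype=int)
    starts, lengths = tokens[0::2] - 1, tokens[1::2]
    for start, length in zip(starts, lengths):
        mask[start:start + length] = 1
    return mask.reshape(height, width)

# Two runs on a toy 3x4 image: pixels 2-4 and 9-10 (1-indexed)
print(rle_decode("2 3 9 2", 3, 4))
```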
|
Spammie | null | null | null | false | 2 | false | Spammie/rev-stable-diff | 2022-09-27T11:12:05.000Z | null | false | 8be2f1f757989d37ca17221661f6a9f66e0b57c8 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/Spammie/rev-stable-diff/resolve/main/README.md | ---
license: gpl-3.0
---
|
artemsnegirev | null | null | null | false | 9 | false | artemsnegirev/dialogs_from_jokes | 2022-09-27T11:43:32.000Z | null | false | 3b0559e997b2dc1a5eb080364ba2420e29e4dd2d | [] | [
"language:ru",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"task_categories:conversational",
"task_ids:dialogue-generation",
"license:cc0-1.0"
] | https://huggingface.co/datasets/artemsnegirev/dialogs_from_jokes/resolve/main/README.md | ---
language:
- ru
multilinguality:
- monolingual
pretty_name: Dialogs from Jokes
size_categories:
- 100K<n<1M
task_categories:
- conversational
task_ids:
- dialogue-generation
license: cc0-1.0
---
Converted to json version of dataset from [Koziev/NLP_Datasets](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data/extract_dialogues_from_anekdots.tar.xz) |
jelber2 | null | null | null | false | 1 | false | jelber2/RustBioGPT-valid | 2022-09-27T12:01:37.000Z | null | false | 5500a07ad0e88dae61f0f78a46f17751d5a95c7f | [] | [
"license:mit"
] | https://huggingface.co/datasets/jelber2/RustBioGPT-valid/resolve/main/README.md | ---
license: mit
---
```sh
# Clone the source repository
git clone https://github.com/rust-bio/rust-bio-tools
# For every .rs file, emit one CSV row: "repo_name","path","content","license"
# (newlines in the file content are escaped, and double quotes become single quotes)
rm -f RustBioGPT-validate.csv && for i in `find . -name "*.rs"`;do paste -d "," <(echo "rust-bio-tools"|perl -pe "s/(.+)/\"\1\"/g") <(echo $i|perl -pe "s/(.+)/\"\1\"/g") <(perl -pe "s/\n/\\\n/g" $i|perl -pe s"/\"/\'/g" |perl -pe "s/(.+)/\"\1\"/g") <(echo "mit"|perl -pe "s/(.+)/\"\1\"/g") >> RustBioGPT-validate.csv; done
# Prepend the CSV header row
sed -i '1i "repo_name","path","content","license"' RustBioGPT-validate.csv
``` |
musper | null | null | null | false | 2 | false | musper/hr_dataset_repo | 2022-09-27T14:13:23.000Z | null | false | 9faf4c6b77e44eef775cb951bd9cb094db9f301a | [] | [
"license:unlicense"
] | https://huggingface.co/datasets/musper/hr_dataset_repo/resolve/main/README.md | ---
license: unlicense
---
|
IDEA-CCNL | null | null | null | false | 9 | false | IDEA-CCNL/laion2B-multi-chinese-subset | 2022-09-28T18:07:45.000Z | null | false | 98893dcd564e85ee0e4d85e890f12ad4e5f5b07b | [] | [
"arxiv:2209.02970",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:zh",
"license:cc-by-4.0",
"multilinguality:monolingual",
"task_categories:feature-extraction"
] | https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- zh
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: laion2B-multi-chinese-subset
task_categories:
- feature-extraction
---
# laion2B-multi-chinese-subset
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
取自Laion2B多语言多模态数据集中的中文部分,一共143M个图文对。
A subset from Laion2B (a multimodal dataset), around 143M image-text pairs (only Chinese).
## 数据集信息 Dataset Information
大约一共143M个中文图文对。大约占用19GB空间(仅仅是url等文本信息,不包含图片)。
Around 143M Chinese image-text pairs in total, taking up about 19GB (text information such as URLs only; images are not included).
- Homepage: [laion-5b](https://laion.ai/blog/laion-5b/)
- Huggingface: [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## License
CC-BY-4.0
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
 |
severo | null | @article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
} | WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fits the association. This dataset was collected via the WinoGAViL online game to collect vision-and-language associations, (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis as well as the feedback we collect from players indicate that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. | false | 2 | false | severo/winogavil | 2022-09-27T14:00:32.000Z | winogavil | false | 4d17ebae87690692e4ce9f102f35d28fa7ed5b66 | [] | [
"arxiv:2207.12576",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:commonsense-reasoning",
"tags:visual-reasoning",
"extra_gated_prompt:By clicking ... | https://huggingface.co/datasets/severo/winogavil/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: winogavil
pretty_name: WinoGAViL
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- commonsense-reasoning
- visual-reasoning
task_ids: []
extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes. The full license agreement is available in the dataset files."
---
# Dataset Card for WinoGAViL
- [Dataset Description](#dataset-description)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Colab notebook code for Winogavil evaluation with CLIP](#colab-notebook-code-for-winogavil-evaluation-with-clip)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
WinoGAViL is a challenging dataset for evaluating vision-and-language commonsense reasoning abilities. Given a set of images, a cue, and a number K, the task is to select the K images that best fit the association. This dataset was collected via the WinoGAViL online game for collecting vision-and-language associations (e.g., werewolves to a full moon). Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We evaluate several state-of-the-art vision-and-language models, finding that the associations are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) achieves a score of 52%, succeeding mostly where the cue is visually salient. Our analysis, as well as the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more.
- **Homepage:**
https://winogavil.github.io/
- **Colab**
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
- **Repository:**
https://github.com/WinoGAViL/WinoGAViL-experiments/
- **Paper:**
https://arxiv.org/abs/2207.12576
- **Leaderboard:**
https://winogavil.github.io/leaderboard
- **Point of Contact:**
winogavil@gmail.com; yonatanbitton1@gmail.com
### Supported Tasks and Leaderboards
https://winogavil.github.io/leaderboard.
https://paperswithcode.com/dataset/winogavil.
## Colab notebook code for Winogavil evaluation with CLIP
https://colab.research.google.com/drive/19qcPovniLj2PiLlP75oFgsK-uhTr6SSi
### Languages
English.
## Dataset Structure
### Data Fields
- `candidates` (list): ["bison", "shelter", "beard", "flea", "cattle", "shave"] - the list of image candidates.
- `cue` (string): pogonophile - the generated cue.
- `associations` (string): ["bison", "beard", "shave"] - the images associated with the cue, as selected by the user.
- `score_fool_the_ai` (int64): 80 - the spymaster score (100 - model score) for fooling the AI, with the CLIP RN50 model.
- `num_associations` (int64): 3 - the number of images selected as associated with the cue.
- `num_candidates` (int64): 6 - the total number of candidates.
- `solvers_jaccard_mean` (float64): 1.0 - the average of three solvers' scores on the generated association instance.
- `solvers_jaccard_std` (float64): 1.0 - the standard deviation of three solvers' scores on the generated association instance.
- `ID` (int64): 367 - the association ID.
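The solvers' scores (and the leaderboard metric) are Jaccard indices between a player's or model's selected images and the gold associations. A minimal sketch of the computation, using the example values above:

```python
def jaccard(predicted, gold):
    """Jaccard index: |intersection| / |union| of the two selections."""
    p, g = set(predicted), set(gold)
    return len(p & g) / len(p | g)

# Two of three guesses match the gold associations:
print(jaccard(["bison", "beard", "cattle"], ["bison", "beard", "shave"]))  # → 0.5
```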
### Data Splits
There is a single TEST split. In the accompanying paper and code we sample it to create different training sets, but the intended use is to use WinoGAViL as a test set.
There are different numbers of candidates, which create different difficulty levels:
- With 5 candidates, a random model's expected score is 38%.
- With 6 candidates, a random model's expected score is 34%.
- With 10 candidates, a random model's expected score is 24%.
- With 12 candidates, a random model's expected score is 19%.
<details>
<summary>Why random chance for success with 5 candidates is 38%?</summary>
It is a binomial distribution probability calculation.
Assuming N=5 candidates, and K=2 associations, there could be three events:
(1) The probability that a random guess is correct on 0 associations is 0.3 (elaborated below), and the Jaccard index is 0 (there is no intersection between the correct labels and the wrong guesses). Therefore the expected random score is 0.
(2) The probability that a random guess is correct on 1 association is 0.6, and the Jaccard index is 0.33 (intersection=1, union=3: one correct guess and one wrong guess). Therefore the expected random score is 0.6*0.33 = 0.198.
(3) The probability that a random guess is correct on 2 associations is 0.1, and the Jaccard index is 1 (intersection=2, union=2). Therefore the expected random score is 0.1*1 = 0.1.
* Together, when K=2, the expected score is 0+0.198+0.1 = 0.298.
To calculate (1), the first guess needs to be wrong. There are 3 "wrong" guesses and 5 candidates, so the probability for it is 3/5. The next guess should also be wrong. Now there are only 2 "wrong" guesses, and 4 candidates, so the probability for it is 2/4. Multiplying 3/5 * 2/4 = 0.3.
Same goes for (2) and (3).
Now we can perform the same calculation with K=3 associations.
Assuming N=5 candidates, and K=3 associations, there could be four events:
(4) The probability that a random guess is correct on 0 associations is 0, and the Jaccard index is 0. Therefore the expected random score is 0.
(5) The probability that a random guess is correct on 1 association is 0.3, and the Jaccard index is 0.2 (intersection=1, union=4). Therefore the expected random score is 0.3*0.2 = 0.06.
(6) The probability that a random guess is correct on 2 associations is 0.6, and the Jaccard index is 0.5 (intersection=2, union=4). Therefore the expected random score is 0.6*0.5 = 0.3.
(7) The probability that a random guess is correct on 3 associations is 0.1, and the Jaccard index is 1 (intersection=3, union=3). Therefore the expected random score is 0.1*1 = 0.1.
* Together, when K=3, the expected score is 0+0.06+0.3+0.1 = 0.46.
Taking the average of 0.298 and 0.46 we reach 0.379.
The same process can be repeated with 6 candidates (K=2,3,4), 10 candidates (K=2,3,4,5), and 12 candidates (K=2,3,4,5,6).
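The calculation above follows a hypergeometric distribution, so it can be reproduced in a few lines; a sketch recovering the ~38% figure for 5 candidates:

```python
from math import comb

def expected_jaccard(n, k):
    """Expected Jaccard index of a uniformly random guess of k images
    out of n candidates, when exactly k of them are correct."""
    total = comb(n, k)
    # i is the size of the intersection between the guess and the truth;
    # the Jaccard index is then i / (2k - i).
    return sum(
        comb(k, i) * comb(n - k, k - i) / total * (i / (2 * k - i))
        for i in range(1, k + 1)
    )

# N=5 candidates, averaging over K=2 and K=3 associations:
score = (expected_jaccard(5, 2) + expected_jaccard(5, 3)) / 2
print(round(score, 2))  # → 0.38
```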
</details>
## Dataset Creation
Inspired by the popular card game Codenames, a “spymaster” gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating
associations that are challenging for a rival AI model but still solvable by other
human players.
### Annotations
#### Annotation process
We paid Amazon Mechanical Turk Workers to play our game.
## Considerations for Using the Data
All associations were obtained with human annotators.
### Licensing Information
CC-By 4.0
### Citation Information
@article{bitton2022winogavil,
title={WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models},
author={Bitton, Yonatan and Guetta, Nitzan Bitton and Yosef, Ron and Elovici, Yuval and Bansal, Mohit and Stanovsky, Gabriel and Schwartz, Roy},
journal={arXiv preprint arXiv:2207.12576},
year={2022}
|
winfried | null | null | null | false | 2 | false | winfried/gnn_bvp_solver | 2022-09-27T16:52:13.000Z | null | false | c20fb7cdff2c4b197e4c4125f850db01a559b4ab | [] | [
"arxiv:2206.14092",
"license:mit"
] | https://huggingface.co/datasets/winfried/gnn_bvp_solver/resolve/main/README.md | ---
license: mit
---
Dataset for paper: Learning the Solution Operator of Boundary Value Problems using Graph Neural Networks
https://arxiv.org/abs/2206.14092 |
dracoglacius | null | null | null | false | 2 | false | dracoglacius/timit | 2022-09-27T15:39:35.000Z | null | false | 6b02cd3afdb4739ec50cd9d492fb9fbfbc2f584d | [] | [
"license:mit"
] | https://huggingface.co/datasets/dracoglacius/timit/resolve/main/README.md | ---
license: mit
---
|
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055858 | 2022-09-27T16:30:46.000Z | null | false | c219307f7fd35f295dcd0cdf4cc94cd949158b30 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055858/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-6.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955856 | 2022-09-27T16:17:41.000Z | null | false | 4596f8cd06aa6f0fc71957d2e6a1f33c8664b643 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955856/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-1.3b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055859 | 2022-09-27T16:43:28.000Z | null | false | fba43e6d568abcfdab87ffe3068571fd21dca450 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055859/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955857 | 2022-09-27T16:19:55.000Z | null | false | 25a3771e345e9226611b04bc2bd695eaebad972e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955857/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-2.7b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 1 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055860 | 2022-09-27T17:25:03.000Z | null | false | 36506bf4050ad3043e111c1812be9c557b238954 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-7776e8-1573055860/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-30b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955855 | 2022-09-27T16:15:50.000Z | null | false | 2afaf26908533ee079a8fe1fb7d36c595b8d7176 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_dev"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-e92f99-1572955855/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_dev
eval_info:
task: text_zero_shot_classification
model: facebook/opt-350m
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_dev
dataset_config: mathemakitten--winobias_antistereotype_dev
dataset_split: validation
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
tner | null | @inproceedings{pan-etal-2017-cross,
title = "Cross-lingual Name Tagging and Linking for 282 Languages",
author = "Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1178",
doi = "10.18653/v1/P17-1178",
pages = "1946--1958",
abstract = "The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating {``}silver-standard{''} annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.",
} | [WikiAnn](https://aclanthology.org/P17-1178/) | false | 254 | false | tner/wikiann | 2022-09-27T18:39:42.000Z | null | false | e17a8195959cef8071410fd7fa8c4130a16a3a72 | [] | [
"language:ace",
"language:bg",
"language:da",
"language:fur",
"language:ilo",
"language:lij",
"language:mzn",
"language:qu",
"language:su",
"language:vi",
"language:af",
"language:bh",
"language:de",
"language:fy",
"language:io",
"language:lmo",
"language:nap",
"language:rm",
"la... | https://huggingface.co/datasets/tner/wikiann/resolve/main/README.md | ---
language:
- ace
- bg
- da
- fur
- ilo
- lij
- mzn
- qu
- su
- vi
- af
- bh
- de
- fy
- io
- lmo
- nap
- rm
- sv
- vls
- als
- bn
- diq
- ga
- is
- ln
- nds
- ro
- sw
- vo
- am
- bo
- dv
- gan
- it
- lt
- ne
- ru
- szl
- wa
- an
- br
- el
- gd
- ja
- lv
- nl
- rw
- ta
- war
- ang
- bs
- eml
- gl
- jbo
- nn
- sa
- te
- wuu
- ar
- ca
- en
- gn
- jv
- mg
- no
- sah
- tg
- xmf
- arc
- eo
- gu
- ka
- mhr
- nov
- scn
- th
- yi
- arz
- cdo
- es
- hak
- kk
- mi
- oc
- sco
- tk
- yo
- as
- ce
- et
- he
- km
- min
- or
- sd
- tl
- zea
- ast
- ceb
- eu
- hi
- kn
- mk
- os
- sh
- tr
- ay
- ckb
- ext
- hr
- ko
- ml
- pa
- si
- tt
- az
- co
- fa
- hsb
- ksh
- mn
- pdc
- ug
- ba
- crh
- fi
- hu
- ku
- mr
- pl
- sk
- uk
- zh
- bar
- cs
- hy
- ky
- ms
- pms
- sl
- ur
- csb
- fo
- ia
- la
- mt
- pnb
- so
- uz
- cv
- fr
- id
- lb
- mwl
- ps
- sq
- vec
- be
- cy
- frr
- ig
- li
- my
- pt
- sr
multilinguality:
- multilingual
size_categories:
- 10K<100k
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: WikiAnn
---
# Dataset Card for "tner/wikiann"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/P17-1178/](https://aclanthology.org/P17-1178/)
- **Dataset:** WikiAnn
- **Domain:** Wikipedia
- **Number of Entity Types:** 3
### Dataset Summary
WikiAnn NER dataset formatted as a part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `LOC`, `ORG`, `PER`
## Dataset Structure
### Data Instances
An example from the `train` split of `ja` looks as follows.
```
{
'tokens': ['#', '#', 'ユ', 'リ', 'ウ', 'ス', '・', 'ベ', 'ー', 'リ', 'ッ', 'ク', '#', '1', '9','9','9'],
'tags': [6, 6, 2, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/wikiann/raw/main/dataset/label.json).
```python
{
"B-LOC": 0,
"B-ORG": 1,
"B-PER": 2,
"I-LOC": 3,
"I-ORG": 4,
"I-PER": 5,
"O": 6
}
```
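As a minimal illustration (the `decode_spans` helper below is a hypothetical sketch, not part of the TNER tooling), the integer tags can be decoded back into `(start, end, type)` entity spans using the mapping above:

```python
# Hypothetical BIO decoder for the label map above (not part of TNER).
label2id = {"B-LOC": 0, "B-ORG": 1, "B-PER": 2, "I-LOC": 3, "I-ORG": 4, "I-PER": 5, "O": 6}
id2label = {i: label for label, i in label2id.items()}

def decode_spans(tags):
    """Return (start, end, type) spans from a list of integer tags; `end` is exclusive."""
    spans, start, etype = [], None, None
    for i, tag_id in enumerate(tags):
        label = id2label[tag_id]
        if label.startswith("B-"):
            if start is not None:          # a new B- closes any open span
                spans.append((start, i, etype))
            start, etype = i, label[2:]
        elif label.startswith("I-") and etype == label[2:]:
            continue                        # span continues
        else:                               # "O" or a type mismatch closes the span
            if start is not None:
                spans.append((start, i, etype))
            start, etype = None, None
    if start is not None:
        spans.append((start, len(tags), etype))
    return spans

# Tags from the `ja` example above: a single PER span over tokens 2-11.
print(decode_spans([6, 6, 2, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6]))
# → [(2, 12, 'PER')]
```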
### Data Splits
| language | train | validation | test |
|:-------------|--------:|-------------:|-------:|
| ace | 100 | 100 | 100 |
| bg | 20000 | 10000 | 10000 |
| da | 20000 | 10000 | 10000 |
| fur | 100 | 100 | 100 |
| ilo | 100 | 100 | 100 |
| lij | 100 | 100 | 100 |
| mzn | 100 | 100 | 100 |
| qu | 100 | 100 | 100 |
| su | 100 | 100 | 100 |
| vi | 20000 | 10000 | 10000 |
| af | 5000 | 1000 | 1000 |
| bh | 100 | 100 | 100 |
| de | 20000 | 10000 | 10000 |
| fy | 1000 | 1000 | 1000 |
| io | 100 | 100 | 100 |
| lmo | 100 | 100 | 100 |
| nap | 100 | 100 | 100 |
| rm | 100 | 100 | 100 |
| sv | 20000 | 10000 | 10000 |
| vls | 100 | 100 | 100 |
| als | 100 | 100 | 100 |
| bn | 10000 | 1000 | 1000 |
| diq | 100 | 100 | 100 |
| ga | 1000 | 1000 | 1000 |
| is | 1000 | 1000 | 1000 |
| ln | 100 | 100 | 100 |
| nds | 100 | 100 | 100 |
| ro | 20000 | 10000 | 10000 |
| sw | 1000 | 1000 | 1000 |
| vo | 100 | 100 | 100 |
| am | 100 | 100 | 100 |
| bo | 100 | 100 | 100 |
| dv | 100 | 100 | 100 |
| gan | 100 | 100 | 100 |
| it | 20000 | 10000 | 10000 |
| lt | 10000 | 10000 | 10000 |
| ne | 100 | 100 | 100 |
| ru | 20000 | 10000 | 10000 |
| szl | 100 | 100 | 100 |
| wa | 100 | 100 | 100 |
| an | 1000 | 1000 | 1000 |
| br | 1000 | 1000 | 1000 |
| el | 20000 | 10000 | 10000 |
| gd | 100 | 100 | 100 |
| ja | 20000 | 10000 | 10000 |
| lv | 10000 | 10000 | 10000 |
| nl | 20000 | 10000 | 10000 |
| rw | 100 | 100 | 100 |
| ta | 15000 | 1000 | 1000 |
| war | 100 | 100 | 100 |
| ang | 100 | 100 | 100 |
| bs | 15000 | 1000 | 1000 |
| eml | 100 | 100 | 100 |
| gl | 15000 | 10000 | 10000 |
| jbo | 100 | 100 | 100 |
| map-bms | 100 | 100 | 100 |
| nn | 20000 | 1000 | 1000 |
| sa | 100 | 100 | 100 |
| te | 1000 | 1000 | 1000 |
| wuu | 100 | 100 | 100 |
| ar | 20000 | 10000 | 10000 |
| ca | 20000 | 10000 | 10000 |
| en | 20000 | 10000 | 10000 |
| gn | 100 | 100 | 100 |
| jv | 100 | 100 | 100 |
| mg | 100 | 100 | 100 |
| no | 20000 | 10000 | 10000 |
| sah | 100 | 100 | 100 |
| tg | 100 | 100 | 100 |
| xmf | 100 | 100 | 100 |
| arc | 100 | 100 | 100 |
| cbk-zam | 100 | 100 | 100 |
| eo | 15000 | 10000 | 10000 |
| gu | 100 | 100 | 100 |
| ka | 10000 | 10000 | 10000 |
| mhr | 100 | 100 | 100 |
| nov | 100 | 100 | 100 |
| scn | 100 | 100 | 100 |
| th | 20000 | 10000 | 10000 |
| yi | 100 | 100 | 100 |
| arz | 100 | 100 | 100 |
| cdo | 100 | 100 | 100 |
| es | 20000 | 10000 | 10000 |
| hak | 100 | 100 | 100 |
| kk | 1000 | 1000 | 1000 |
| mi | 100 | 100 | 100 |
| oc | 100 | 100 | 100 |
| sco | 100 | 100 | 100 |
| tk | 100 | 100 | 100 |
| yo | 100 | 100 | 100 |
| as | 100 | 100 | 100 |
| ce | 100 | 100 | 100 |
| et | 15000 | 10000 | 10000 |
| he | 20000 | 10000 | 10000 |
| km | 100 | 100 | 100 |
| min | 100 | 100 | 100 |
| or | 100 | 100 | 100 |
| sd | 100 | 100 | 100 |
| tl | 10000 | 1000 | 1000 |
| zea | 100 | 100 | 100 |
| ast | 1000 | 1000 | 1000 |
| ceb | 100 | 100 | 100 |
| eu | 10000 | 10000 | 10000 |
| hi | 5000 | 1000 | 1000 |
| kn | 100 | 100 | 100 |
| mk | 10000 | 1000 | 1000 |
| os | 100 | 100 | 100 |
| sh | 20000 | 10000 | 10000 |
| tr | 20000 | 10000 | 10000 |
| zh-classical | 100 | 100 | 100 |
| ay | 100 | 100 | 100 |
| ckb | 1000 | 1000 | 1000 |
| ext | 100 | 100 | 100 |
| hr | 20000 | 10000 | 10000 |
| ko | 20000 | 10000 | 10000 |
| ml | 10000 | 1000 | 1000 |
| pa | 100 | 100 | 100 |
| si | 100 | 100 | 100 |
| tt | 1000 | 1000 | 1000 |
| zh-min-nan | 100 | 100 | 100 |
| az | 10000 | 1000 | 1000 |
| co | 100 | 100 | 100 |
| fa | 20000 | 10000 | 10000 |
| hsb | 100 | 100 | 100 |
| ksh | 100 | 100 | 100 |
| mn | 100 | 100 | 100 |
| pdc | 100 | 100 | 100 |
| simple | 20000 | 1000 | 1000 |
| ug | 100 | 100 | 100 |
| zh-yue | 20000 | 10000 | 10000 |
| ba | 100 | 100 | 100 |
| crh | 100 | 100 | 100 |
| fi | 20000 | 10000 | 10000 |
| hu | 20000 | 10000 | 10000 |
| ku | 100 | 100 | 100 |
| mr | 5000 | 1000 | 1000 |
| pl | 20000 | 10000 | 10000 |
| sk | 20000 | 10000 | 10000 |
| uk | 20000 | 10000 | 10000 |
| zh | 20000 | 10000 | 10000 |
| bar | 100 | 100 | 100 |
| cs | 20000 | 10000 | 10000 |
| fiu-vro | 100 | 100 | 100 |
| hy | 15000 | 1000 | 1000 |
| ky | 100 | 100 | 100 |
| ms | 20000 | 1000 | 1000 |
| pms | 100 | 100 | 100 |
| sl | 15000 | 10000 | 10000 |
| ur | 20000 | 1000 | 1000 |
| bat-smg | 100 | 100 | 100 |
| csb | 100 | 100 | 100 |
| fo | 100 | 100 | 100 |
| ia | 100 | 100 | 100 |
| la | 5000 | 1000 | 1000 |
| mt | 100 | 100 | 100 |
| pnb | 100 | 100 | 100 |
| so | 100 | 100 | 100 |
| uz | 1000 | 1000 | 1000 |
| be-x-old | 5000 | 1000 | 1000 |
| cv | 100 | 100 | 100 |
| fr | 20000 | 10000 | 10000 |
| id | 20000 | 10000 | 10000 |
| lb | 5000 | 1000 | 1000 |
| mwl | 100 | 100 | 100 |
| ps | 100 | 100 | 100 |
| sq | 5000 | 1000 | 1000 |
| vec | 100 | 100 | 100 |
| be | 15000 | 1000 | 1000 |
| cy | 10000 | 1000 | 1000 |
| frr | 100 | 100 | 100 |
| ig | 100 | 100 | 100 |
| li | 100 | 100 | 100 |
| my | 100 | 100 | 100 |
| pt | 20000 | 10000 | 10000 |
| sr | 20000 | 10000 | 10000 |
| vep | 100 | 100 | 100 |
### Citation Information
```
@inproceedings{pan-etal-2017-cross,
title = "Cross-lingual Name Tagging and Linking for 282 Languages",
author = "Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1178",
doi = "10.18653/v1/P17-1178",
pages = "1946--1958",
abstract = "The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating {``}silver-standard{''} annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.",
}
``` |
freddyaboulton | null | null | null | false | 2 | false | freddyaboulton/gradio-reviews | 2022-11-15T18:11:24.000Z | null | false | ba8f8a268f2cc77a37d3703580f50975975d16ec | [] | [
"license:mit"
] | https://huggingface.co/datasets/freddyaboulton/gradio-reviews/resolve/main/README.md | ---
license: mit
---
|
tner | null | @inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
doi = "10.18653/v1/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
} | [wikineural](https://aclanthology.org/2021.findings-emnlp.215/) | false | 46 | false | tner/wikineural | 2022-09-27T19:46:37.000Z | null | false | ce7483a909a7b68ddc02920087462355f7680057 | [] | [
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"multilinguality:multilingual",
"size_categories:10K<100k",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/tner/wikineural/resolve/main/README.md | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
multilinguality:
- multilingual
size_categories:
- 10K<100k
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: WikiNeural
---
# Dataset Card for "tner/wikineural"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2021.findings-emnlp.215/](https://aclanthology.org/2021.findings-emnlp.215/)
- **Dataset:** WikiNeural
- **Domain:** Wikipedia
- **Number of Entity Types:** 16
### Dataset Summary
WikiNeural NER dataset formatted as a part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`
## Dataset Structure
### Data Instances
An example from the `train` split of `de` looks as follows.
```
{
'tokens': [ "Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen", "Roman", "von", "Noël", "Calef", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0 ]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-MISC": 31,
"I-MISC": 32
}
```
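To make the mapping concrete, here is a small sketch (not part of the TNER tooling) that labels the `de` example above by inverting the dictionary; only the ids that occur in the example are included:

```python
# Invert an (abridged) label2id mapping and apply it to the `de` example above.
label2id = {"O": 0, "B-PER": 1, "I-PER": 2}
id2label = {i: label for label, i in label2id.items()}

tokens = ["Dieses", "wiederum", "basierte", "auf", "dem", "gleichnamigen",
          "Roman", "von", "Noël", "Calef", "."]
tags = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0]

# Pair each token with its BIO label.
labeled = [(token, id2label[t]) for token, t in zip(tokens, tags)]
entity = [token for token, label in labeled if label != "O"]
print(entity)  # → ['Noël', 'Calef'] — the PER mention in the sentence
```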
### Data Splits
| language | train | validation | test |
|:-----------|--------:|-------------:|-------:|
| de | 98640 | 12330 | 12372 |
| en | 92720 | 11590 | 11597 |
| es | 76320 | 9540 | 9618 |
| fr | 100800 | 12600 | 12678 |
| it | 88400 | 11050 | 11069 |
| nl | 83680 | 10460 | 10547 |
| pl | 108160 | 13520 | 13585 |
| pt | 80560 | 10070 | 10160 |
| ru | 92320 | 11540 | 11580 |
### Citation Information
```
@inproceedings{tedeschi-etal-2021-wikineural-combined,
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
author = "Tedeschi, Simone and
Maiorca, Valentino and
Campolungo, Niccol{\`o} and
Cecconi, Francesco and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.215",
doi = "10.18653/v1/2021.findings-emnlp.215",
pages = "2521--2533",
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
}
``` |
chunkeduptube | null | null | null | false | 1 | false | chunkeduptube/chunkis | 2022-09-27T18:26:34.000Z | null | false | 533a80b990626e7984be36fbfeb2371c425b2a27 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/chunkeduptube/chunkis/resolve/main/README.md | ---
license: artistic-2.0
---
|
tner | null | @inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812",
abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
} | [MultiNERD](https://aclanthology.org/2022.findings-naacl.60/) | false | 659 | false | tner/multinerd | 2022-09-27T19:48:40.000Z | null | false | facdfd1c6f139820e44b5dd7b341d056fbe2044e | [] | [
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"multilinguality:multilingual",
"size_categories:<10K",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/tner/multinerd/resolve/main/README.md | ---
language:
- de
- en
- es
- fr
- it
- nl
- pl
- pt
- ru
multilinguality:
- multilingual
size_categories:
- <10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: MultiNERD
---
# Dataset Card for "tner/multinerd"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/2022.findings-naacl.60/](https://aclanthology.org/2022.findings-naacl.60/)
- **Dataset:** MultiNERD
- **Domain:** Wikipedia, WikiNews
- **Number of Entity Types:** 18
### Dataset Summary
MultiNERD NER benchmark dataset formatted as a part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`, `SUPER`, `PHY`
## Dataset Structure
### Data Instances
An example from the `train` split of `de` looks as follows.
```
{
'tokens': [ "Die", "Blätter", "des", "Huflattichs", "sind", "leicht", "mit", "den", "sehr", "ähnlichen", "Blättern", "der", "Weißen", "Pestwurz", "(", "\"", "Petasites", "albus", "\"", ")", "zu", "verwechseln", "." ],
'tags': [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0 ]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/multinerd/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"I-PER": 2,
"B-LOC": 3,
"I-LOC": 4,
"B-ORG": 5,
"I-ORG": 6,
"B-ANIM": 7,
"I-ANIM": 8,
"B-BIO": 9,
"I-BIO": 10,
"B-CEL": 11,
"I-CEL": 12,
"B-DIS": 13,
"I-DIS": 14,
"B-EVE": 15,
"I-EVE": 16,
"B-FOOD": 17,
"I-FOOD": 18,
"B-INST": 19,
"I-INST": 20,
"B-MEDIA": 21,
"I-MEDIA": 22,
"B-PLANT": 23,
"I-PLANT": 24,
"B-MYTH": 25,
"I-MYTH": 26,
"B-TIME": 27,
"I-TIME": 28,
"B-VEHI": 29,
"I-VEHI": 30,
"B-SUPER": 31,
"I-SUPER": 32,
"B-PHY": 33,
"I-PHY": 34
}
```
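The 35 ids above follow a regular pattern: `O` is 0, and each entity type then contributes a consecutive `B-`/`I-` pair. As a sketch (the ordering is read off the JSON above, not a documented invariant of the dataset), the mapping can be rebuilt programmatically:

```python
# Rebuild the label2id mapping above: "O" is 0, then each entity type
# contributes a B-/I- pair of consecutive ids, in the order of label.json.
types = ["PER", "LOC", "ORG", "ANIM", "BIO", "CEL", "DIS", "EVE", "FOOD",
         "INST", "MEDIA", "PLANT", "MYTH", "TIME", "VEHI", "SUPER", "PHY"]
label2id = {"O": 0}
for i, etype in enumerate(types):
    label2id[f"B-{etype}"] = 1 + 2 * i
    label2id[f"I-{etype}"] = 2 + 2 * i

print(len(label2id), label2id["B-PLANT"], label2id["I-PHY"])  # → 35 23 34
```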
### Data Splits
| language | test |
|:-----------|-------:|
| de | 156792 |
| en | 164144 |
| es | 173189 |
| fr | 176185 |
| it | 181927 |
| nl | 171711 |
| pl | 194965 |
| pt | 177565 |
| ru | 82858 |
### Citation Information
```
@inproceedings{tedeschi-navigli-2022-multinerd,
title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)",
author = "Tedeschi, Simone and
Navigli, Roberto",
booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-naacl.60",
doi = "10.18653/v1/2022.findings-naacl.60",
pages = "801--812",
abstract = "Named Entity Recognition (NER) is the task of identifying named entities in texts and classifying them through specific semantic categories, a process which is crucial for a wide range of NLP applications. Current datasets for NER focus mainly on coarse-grained entity types, tend to consider a single textual genre and to cover a narrow set of languages, thus limiting the general applicability of NER systems.In this work, we design a new methodology for automatically producing NER annotations, and address the aforementioned limitations by introducing a novel dataset that covers 10 languages, 15 NER categories and 2 textual genres.We also introduce a manually-annotated test set, and extensively evaluate the quality of our novel dataset on both this new test set and standard benchmarks for NER.In addition, in our dataset, we include: i) disambiguation information to enable the development of multilingual entity linking systems, and ii) image URLs to encourage the creation of multimodal systems.We release our dataset at https://github.com/Babelscape/multinerd.",
}
``` |
LucaBlight | null | null | null | false | 1 | false | LucaBlight/Kheiron | 2022-09-27T20:36:17.000Z | null | false | dfd59f85a7256d183b215f86b8ad1c8a8bdc6ec3 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/LucaBlight/Kheiron/resolve/main/README.md | ---
license: afl-3.0
---
|
marcmaxmeister | null | null | null | false | 2 | false | marcmaxmeister/unitarian-universalist-sermons | 2022-09-28T21:04:16.000Z | null | false | ebbb3a2ae953c0a73ab3db40e849c6c23a82542a | [] | [
"license:mit"
] | https://huggingface.co/datasets/marcmaxmeister/unitarian-universalist-sermons/resolve/main/README.md |
---
license: mit
---
## Sample

- 6900 transcripts
- 44 churches
- timeframe: 2010-2022
- Denomination: Unitarian Universalist, USA

## Dataset structure

- church (church name or website)
- source (mp3 file)
- text
- sentences (count)
- errors (number of sentences skipped because the audio could not be understood, or because of long pauses)
- duration (in seconds)

## Dataset creation

- see the notebook in the repository files
|
jmercat | null | @InProceedings{NiMe:2022,
author = {Haruki Nishimura, Jean Mercat, Blake Wulfe, Rowan McAllister},
title = {RAP: Risk-Aware Prediction for Robust Planning},
booktitle = {Proceedings of the 2022 IEEE International Conference on Robot Learning (CoRL)},
month = {December},
year = {2022},
address = {Grafton Road, Auckland CBD, Auckland 1010},
url = {},
} | Dataset of pre-processed samples from a small portion of the Waymo Open Motion Data for our risk-biased prediction task. | false | 90 | false | jmercat/risk_biased_dataset | 2022-10-31T18:27:16.000Z | null | false | 820a382798e73abf28737e147e02c980180f9825 | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/jmercat/risk_biased_dataset/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
The code is provided under an Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Under the license, the code is provided royalty free for non-commercial purposes only. The code may be covered by patents and if you want to use the code for commercial purposes, please contact us for a different license.
This dataset is a pre-processed small sample of the Waymo Open Motion Dataset intended for illustration purposes only.
|
Zavek | null | null | null | false | 2 | false | Zavek/Contradictory-xnli | 2022-09-28T01:37:20.000Z | null | false | 01982dd3e03603a1e07e2c2d9ad30d0a5a722e95 | [] | [
"license:other"
] | https://huggingface.co/datasets/Zavek/Contradictory-xnli/resolve/main/README.md | ---
license: other
---
|
zyznull | null | @misc{bajaj2018ms,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu
and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song
and Alina Stoica and Saurabh Tiwary and Tong Wang},
year={2018},
eprint={1611.09268},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | false | 1,307 | false | zyznull/msmarco-passage-ranking | 2022-09-28T03:30:10.000Z | null | false | e01e8edff5797a78f34c568ecab33a64794842f2 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/zyznull/msmarco-passage-ranking/resolve/main/README.md | ---
license: apache-2.0
---
|
zyznull | null | @misc{bajaj2018ms,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu
and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song
and Alina Stoica and Saurabh Tiwary and Tong Wang},
year={2018},
eprint={1611.09268},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | false | 2 | false | zyznull/msmarco-passage-corpus | 2022-09-28T07:18:17.000Z | null | false | bfc3add0cfab775f5b4cd6fed9ea37ae66c5d4a4 | [] | [
"license:mit"
] | https://huggingface.co/datasets/zyznull/msmarco-passage-corpus/resolve/main/README.md | ---
license: mit
---
|
dhruvs00 | null | null | null | false | 1 | false | dhruvs00/datahogyaset | 2022-09-28T06:46:48.000Z | null | false | b7b9168a7ce51714c0914a4ac7c8511abc3d82c3 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/dhruvs00/datahogyaset/resolve/main/README.md | ---
license: openrail
---
|
dhruvs00 | null | null | null | false | 1 | false | dhruvs00/datahogyas | 2022-09-28T08:08:02.000Z | null | false | 5e92c47f62e3a16dc4b38ed70aa8841eacb22514 | [] | [] | https://huggingface.co/datasets/dhruvs00/datahogyas/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: datahogyas
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- token-classification
task_ids:
- part-of-speech
train-eval-index:
- col_mapping:
labels: tags
tokens: tokens
config: default
splits:
eval_split: test
task: token-classification
task_id: entity_extraction
--- |
autoevaluator | null | null | null | false | 13 | false | autoevaluator/benchmark-dummy-data | 2022-09-28T07:59:21.000Z | null | false | e0aa0d6203eced7d18f03fbbd6c7ffc73bf8646d | [] | [] | https://huggingface.co/datasets/autoevaluator/benchmark-dummy-data/resolve/main/README.md | # Dummy Dataset for AutoTrain Benchmark
This dataset contains dummy data that's needed to create AutoTrain projects for benchmarks like [RAFT](https://huggingface.co/spaces/ought/raft-leaderboard). See [here](https://github.com/huggingface/hf_benchmarks) for more details. |
zyznull | null | @article{Qiu2022DuReader\_retrievalAL,
title={DuReader\_retrieval: A Large-scale Chinese Benchmark for Passage Retrieval from Web Search Engine},
author={Yifu Qiu and Hongyu Li and Yingqi Qu and Ying Chen and Qiaoqiao She and Jing Liu and Hua Wu and Haifeng Wang},
journal={ArXiv},
year={2022},
volume={abs/2203.10232}
} | null | false | 1 | false | zyznull/dureader-retrieval-corpus | 2022-09-29T06:20:34.000Z | null | false | b45c6068ca0847df4c4bc9eabb99b42aa5b19996 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/zyznull/dureader-retrieval-corpus/resolve/main/README.md | ---
license: apache-2.0
---
|
esc-benchmark | null | null | null | false | 1 | false | esc-benchmark/esc-datasets | 2022-10-14T14:30:30.000Z | null | false | f33c72ade15f98638f3598a9ca4ac989d21f699e | [] | [
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language:en",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"license:cc-by-4.0",
"license:apache-2.0",
"license:cc0-1.0",
"license:cc-by-nc-3.0",
"li... | https://huggingface.co/datasets/esc-benchmark/esc-datasets/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- cc-by-4.0
- apache-2.0
- cc0-1.0
- cc-by-nc-3.0
- other
multilinguality:
- monolingual
pretty_name: esc-datasets
size_categories:
- 100K<n<1M
- 1M<n<10M
source_datasets:
- original
- extended|librispeech_asr
- extended|common_voice
tags:
- asr
- benchmark
- speech
- esc
task_categories:
- automatic-speech-recognition
task_ids: []
extra_gated_prompt: |-
Three of the ESC datasets have specific terms of usage that must be agreed to before using the data.
To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
extra_gated_fields:
I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset: checkbox
I hereby confirm that I have accepted the terms of usages on GigaSpeech page: checkbox
I hereby confirm that I have accepted the terms of usages on SPGISpeech page: checkbox
---
All eight of the datasets in ESC can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", split="train")
```
- `"esc-benchmark"`: the repository namespace. This is fixed for all ESC datasets.
- `"librispeech"`: the dataset name. This can be changed to any one of the eight datasets in ESC to download that dataset.
- `split="train"`: the split. Set this to one of train/validation/test to generate a specific split. Omit the `split` argument to generate all splits for a dataset.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(librispeech[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
```python
{
'dataset': 'librispeech',
'audio': {'path': '/home/esc-bencher/.cache/huggingface/datasets/downloads/extracted/d2da1969fe9e7d06661b5dc370cf2e3c119a14c35950045bcb76243b264e4f01/374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished',
'id': '374-180298-0000'
}
```
### Data Fields
- `dataset`: name of the ESC dataset from which the sample is taken.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
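As a quick sanity check on the fields above, the clip duration follows directly from the decoded array length and the sampling rate. A minimal sketch, using a mock sample that stands in for a real decoded data point (the values are illustrative, not drawn from the dataset):

```python
# Mock sample in the format shown above; a real one would come from
# load_dataset(...)[0]. The silent 32000-sample array is a stand-in.
sample = {
    "dataset": "librispeech",
    "audio": {
        "path": "374-180298-0000.flac",
        "array": [0.0] * 32000,   # 2 seconds of audio at 16 kHz
        "sampling_rate": 16000,
    },
    "text": "chapter sixteen",
    "id": "374-180298-0000",
}

# Duration in seconds = number of samples / samples per second.
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"{sample['id']}: {duration_s:.2f} s")  # 374-180298-0000: 2.00 s
```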
### Data Preparation
#### Audio
The audio for all ESC datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required for use in training/evaluation scripts.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required for use in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. The ESC benchmark requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esc-benchmark/esc for scoring.
### Access
All eight of the datasets in ESC are accessible and licensing is freely available. Three of the ESC datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esc-benchmark/esc-datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The English subset contains approximately 1,400 hours of audio data from speakers of various nationalities, accents and different recording conditions. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esc-benchmark/esc-datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esc-benchmark/esc-datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esc-benchmark/esc-datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esc-benchmark/esc-datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esc-benchmark/esc-datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esc-benchmark/esc-datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esc-benchmark/esc-datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
|
zyznull | null | @article{Qiu2022DuReader\_retrievalAL,
title={DuReader\_retrieval: A Large-scale Chinese Benchmark for Passage Retrieval from Web Search Engine},
author={Yifu Qiu and Hongyu Li and Yingqi Qu and Ying Chen and Qiaoqiao She and Jing Liu and Hua Wu and Haifeng Wang},
journal={ArXiv},
year={2022},
volume={abs/2203.10232}
} | null | false | 16 | false | zyznull/dureader-retrieval-ranking | 2022-09-29T08:48:29.000Z | null | false | b545a35b467296410a4982bc25fafd9533b46d5b | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/zyznull/dureader-retrieval-ranking/resolve/main/README.md | ---
license: apache-2.0
---
|
mayjestro | null | null | null | false | 1 | false | mayjestro/LittleHodler | 2022-09-28T14:30:31.000Z | null | false | e7da52d27ed5301d1f0f4c7359c04f95befbada5 | [] | [
"license:c-uda"
] | https://huggingface.co/datasets/mayjestro/LittleHodler/resolve/main/README.md | ---
license: c-uda
---
|
autoevaluate | null | null | null | false | 2 | false | autoevaluate/autoeval-eval-big_patent-g-9d42aa-1581555947 | 2022-09-28T11:15:24.000Z | null | false | 0d792180b9349c544a2ea220de6b72f78611fb17 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:big_patent"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-big_patent-g-9d42aa-1581555947/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- big_patent
eval_info:
task: summarization
model: facebook/bart-large-cnn
metrics: ['perplexity']
dataset_name: big_patent
dataset_config: g
dataset_split: validation
col_mapping:
text: description
target: abstract
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: big_patent
* Config: g
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jonesdaniel](https://huggingface.co/jonesdaniel) for evaluating this model. |
DFKI-SLT | null | @inproceedings{zhang-etal-2017-position,
title = "Position-aware Attention and Supervised Data Improve Slot Filling",
author = "Zhang, Yuhao and
Zhong, Victor and
Chen, Danqi and
Angeli, Gabor and
Manning, Christopher D.",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D17-1004",
doi = "10.18653/v1/D17-1004",
pages = "35--45",
}
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
} | TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire
and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges.
Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members) or are labeled as no_relation if no defined relation is held. These examples are created
by combining available human annotations from the TAC KBP challenges and crowdsourcing.
Please see our EMNLP paper, or our EMNLP slides for full details.
Note: There is currently a label-corrected version of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the TACRED Revisited paper
published at ACL 2020.
Note 2: This dataset reader changes the offsets of the following fields, to conform with standard Python usage (see
#_generate_examples()):
- subj_end to subj_end + 1 (make end offset exclusive)
- obj_end to obj_end + 1 (make end offset exclusive)
- stanford_head to stanford_head - 1 (make head offsets 0-based) | false | 13 | false | DFKI-SLT/tacred | 2022-11-15T08:31:32.000Z | null | false | 9b5c795ae353daf809bcf58e852433762407b0f4 | [] | [
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language:en",
"language_creators:found",
"license:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other",
"tags:relation extraction",
"task_categories:text-classification",
... | https://huggingface.co/datasets/DFKI-SLT/tacred/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: The TAC Relation Extraction Dataset and TACRED Revisited
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- relation extraction
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for "tacred"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nlp.stanford.edu/projects/tacred](https://nlp.stanford.edu/projects/tacred)
- **Paper:** [Position-aware Attention and Supervised Data Improve Slot Filling](https://aclanthology.org/D17-1004/)
- **Point of Contact:** See [https://nlp.stanford.edu/projects/tacred/](https://nlp.stanford.edu/projects/tacred/)
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 139.2 MB
- **Total amount of disk used:** 201.5 MB
### Dataset Summary
The TAC Relation Extraction Dataset (TACRED) is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g., per:schools_attended
and org:members) or are labeled as no_relation if no defined relation is held. These examples are created by combining available human annotations from the TAC
KBP challenges and crowdsourcing. Please see [Stanford's EMNLP paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf), or their [EMNLP slides](https://nlp.stanford.edu/projects/tacred/files/position-emnlp2017.pdf) for full details.
Note: There is currently a [label-corrected version](https://github.com/DFKI-NLP/tacrev) of the TACRED dataset, which you should consider using instead of
the original version released in 2017. For more details on this new version, see the [TACRED Revisited paper](https://aclanthology.org/2020.acl-main.142/)
published at ACL 2020.
This repository provides both versions of the dataset as BuilderConfigs - 'original' and 'revisited'.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [https://paperswithcode.com/sota/relation-extraction-on-tacred](https://paperswithcode.com/sota/relation-extraction-on-tacred)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 62.3 MB
- **Size of the generated dataset:** 139.2 MB
- **Total amount of disk used:** 201.5 MB
An example of 'train' looks as follows:
```json
{
"id": "61b3a5c8c9a882dcfcd2",
"docid": "AFP_ENG_20070218.0019.LDC2009T13",
"relation": "org:founded_by",
"token": ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to", "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-", ",", "crossing", "the", "floor", "with", "17", "members", "of", "parliament", ",", "causing", "constitutional", "monarch", "King", "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the", "snap", "election", "."],
"subj_start": 10,
"subj_end": 13,
"obj_start": 0,
"obj_end": 2,
"subj_type": "ORGANIZATION",
"obj_type": "PERSON",
"stanford_pos": ["NNP", "NNP", "VBD", "IN", "NNP", "JJ", "NN", "TO", "VB", "DT", "DT", "NNP", "NNP", "-LRB-", "NNP", "-RRB-", ",", "VBG", "DT", "NN", "IN", "CD", "NNS", "IN", "NN", ",", "VBG", "JJ", "NN", "NNP", "NNP", "NNP", "TO", "VB", "NN", "CC", "VB", "DT", "NN", "NN", "."],
"stanford_ner": ["PERSON", "PERSON", "O", "O", "DATE", "DATE", "DATE", "O", "O", "O", "O", "O", "O", "O", "ORGANIZATION", "O", "O", "O", "O", "O", "O", "NUMBER", "O", "O", "O", "O", "O", "O", "O", "O", "PERSON", "PERSON", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"stanford_head": [2, 3, 0, 5, 3, 7, 3, 9, 3, 13, 13, 13, 9, 15, 13, 15, 3, 3, 20, 18, 23, 23, 18, 25, 23, 3, 3, 32, 32, 32, 32, 27, 34, 27, 34, 34, 34, 40, 40, 37, 3],
"stanford_deprel": ["compound", "nsubj", "ROOT", "case", "nmod", "amod", "nmod:tmod", "mark", "xcomp", "det", "compound", "compound", "dobj", "punct", "appos", "punct", "punct", "xcomp", "det", "dobj", "case", "nummod", "nmod", "case", "nmod", "punct", "xcomp", "amod", "compound", "compound", "compound", "dobj", "mark", "xcomp", "dobj", "cc", "conj", "det", "compound", "dobj", "punct"]
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: the instance id of this sentence, a `string` feature.
- `docid`: the TAC KBP document id of this sentence, a `string` feature.
- `token`: the list of tokens of this sentence, obtained with the StanfordNLP toolkit, a `list` of `string` features.
- `relation`: the relation label of this instance, a `string` classification label.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `subj_type`: the NER type of the subject mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `obj_type`: the NER type of the object mention, among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `string` feature.
- `stanford_pos`: the part-of-speech tag per token, a `list` of `string` features.
- `stanford_ner`: the NER tags of tokens (IO-Scheme), among 23 fine-grained types used in the [Stanford NER system](https://stanfordnlp.github.io/CoreNLP/ner.html), a `list` of `string` features.
- `stanford_deprel`: the Stanford dependency relation tag per token, a `list` of `string` features.
- `stanford_head`: the head (source) token index (0-based) for the dependency relation per token. The root token has a head index of -1, a `list` of `int` features.
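Because this reader converts `subj_end`/`obj_end` to exclusive offsets, plain Python slicing recovers the mention spans. A small sketch using the example instance shown above (only the `token` list and offsets are needed):

```python
# Token list and offsets copied from the example instance above.
token = ["Tom", "Thabane", "resigned", "in", "October", "last", "year", "to",
         "form", "the", "All", "Basotho", "Convention", "-LRB-", "ABC", "-RRB-",
         ",", "crossing", "the", "floor", "with", "17", "members", "of",
         "parliament", ",", "causing", "constitutional", "monarch", "King",
         "Letsie", "III", "to", "dissolve", "parliament", "and", "call", "the",
         "snap", "election", "."]
subj_start, subj_end = 10, 13   # exclusive end, as converted by this reader
obj_start, obj_end = 0, 2

# With exclusive ends, token[start:end] is exactly the mention span.
subject = " ".join(token[subj_start:subj_end])
obj = " ".join(token[obj_start:obj_end])
print(subject)  # All Basotho Convention
print(obj)      # Tom Thabane
```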
### Data Splits
To minimize dataset bias, TACRED is stratified across years in which the TAC KBP challenge was run:
| | Train | Dev | Test |
| ----- | ------ | ----- | ---- |
| TACRED | 68,124 (TAC KBP 2009-2012) | 22,631 (TAC KBP 2013) | 15,509 (TAC KBP 2014) |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
See the Stanford paper and the TACRED Revisited paper, plus their appendices.
To ensure that models trained on TACRED are not biased towards predicting false positives on real-world text,
all sampled sentences where no relation was found between the mention pairs were fully annotated to be negative examples. As a result, 79.5% of the examples
are labeled as no_relation.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
To respect the copyright of the underlying TAC KBP corpus, TACRED is released via the
Linguistic Data Consortium ([LDC License](https://catalog.ldc.upenn.edu/license/ldc-non-members-agreement.pdf)).
You can download TACRED from the [LDC TACRED webpage](https://catalog.ldc.upenn.edu/LDC2018T24).
If you are an LDC member, the access will be free; otherwise, an access fee of $25 is needed.
### Citation Information
The original dataset:
```
@inproceedings{zhang2017tacred,
author = {Zhang, Yuhao and Zhong, Victor and Chen, Danqi and Angeli, Gabor and Manning, Christopher D.},
booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)},
title = {Position-aware Attention and Supervised Data Improve Slot Filling},
url = {https://nlp.stanford.edu/pubs/zhang2017tacred.pdf},
pages = {35--45},
year = {2017}
}
```
For the revised version, please also cite:
```
@inproceedings{alt-etal-2020-tacred,
title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
author = "Alt, Christoph and
Gabryszak, Aleksandra and
Hennig, Leonhard",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.142",
doi = "10.18653/v1/2020.acl-main.142",
pages = "1558--1569",
}
```
### Contributions
Thanks to [@dfki-nlp](https://github.com/dfki-nlp) for adding this dataset.
|
projecte-aina | null | @misc{11234/1-3424,
title = {Universal Dependencies 2.7},
author = {Zeman, Daniel and Nivre, Joakim and Abrams, Mitchell and Ackermann, Elia and Aepli, No{\"e}mi and Aghaei, Hamid and Agi{\'c}, {\v Z}eljko and Ahmadi, Amir and Ahrenberg, Lars and Ajede, Chika Kennedy and Aleksandravi{\v c}i{\=u}t{\.e}, Gabriel{\.e} and Alfina, Ika and Antonsen, Lene and Aplonova, Katya and Aquino, Angelina and Aragon, Carolina and Aranzabe, Maria Jesus and Arnard{\'o}ttir, {\t H}{\'o}runn and Arutie, Gashaw and Arwidarasti, Jessica Naraiswari and Asahara, Masayuki and Ateyah, Luma and Atmaca, Furkan and Attia, Mohammed and Atutxa, Aitziber and Augustinus, Liesbeth and Badmaeva, Elena and Balasubramani, Keerthana and Ballesteros, Miguel and Banerjee, Esha and Bank, Sebastian and Barbu Mititelu, Verginica and Basmov, Victoria and Batchelor, Colin and Bauer, John and Bedir, Seyyit Talha and Bengoetxea, Kepa and Berk, G{\"o}zde and Berzak, Yevgeni and Bhat, Irshad Ahmad and Bhat, Riyaz Ahmad and Biagetti, Erica and Bick, Eckhard and Bielinskien{\.e}, Agn{\.e} and Bjarnad{\'o}ttir, Krist{\'{\i}}n and Blokland, Rogier and Bobicev, Victoria and Boizou, Lo{\"{\i}}c and Borges V{\"o}lker, Emanuel and B{\"o}rstell, Carl and Bosco, Cristina and Bouma, Gosse and Bowman, Sam and Boyd, Adriane and Brokait{\.e}, Kristina and Burchardt, Aljoscha and Candito, Marie and Caron, Bernard and Caron, Gauthier and Cavalcanti, Tatiana and Cebiroglu Eryigit, Gulsen and Cecchini, Flavio Massimiliano and Celano, Giuseppe G. A. and Ceplo, Slavomir and Cetin, Savas and Cetinoglu, Ozlem and Chalub, Fabricio and Chi, Ethan and Cho, Yongseok and Choi, Jinho and Chun, Jayeol and Cignarella, Alessandra T. 
and Cinkova, Silvie and Collomb, Aurelie and Coltekin, Cagr{\i} and Connor, Miriam and Courtin, Marine and Davidson, Elizabeth and de Marneffe, Marie-Catherine and de Paiva, Valeria and Derin, Mehmet Oguz and de Souza, Elvis and Diaz de Ilarraza, Arantza and Dickerson, Carly and Dinakaramani, Arawinda and Dione, Bamba and Dirix, Peter and Dobrovoljc, Kaja and Dozat, Timothy and Droganova, Kira and Dwivedi, Puneet and Eckhoff, Hanne and Eli, Marhaba and Elkahky, Ali and Ephrem, Binyam and Erina, Olga and Erjavec, Tomaz and Etienne, Aline and Evelyn, Wograine and Facundes, Sidney and Farkas, Rich{\'a}rd and Fernanda, Mar{\'{\i}}lia and Fernandez Alcalde, Hector and Foster, Jennifer and Freitas, Cl{\'a}udia and Fujita, Kazunori and Gajdosov{\'a}, Katar{\'{\i}}na and Galbraith, Daniel and Garcia, Marcos and G{\"a}rdenfors, Moa and Garza, Sebastian and Gerardi, Fabr{\'{\i}}cio Ferraz and Gerdes, Kim and Ginter, Filip and Goenaga, Iakes and Gojenola, Koldo and G{\"o}k{\i}rmak, Memduh and Goldberg, Yoav and G{\'o}mez Guinovart, Xavier and Gonz{\'a}lez Saavedra,
Berta and Grici{\=u}t{\.e}, Bernadeta and Grioni, Matias and Grobol, Lo{\"{\i}}c and Gr{\=u}z{\={\i}}tis, Normunds and Guillaume, Bruno and Guillot-Barbance, C{\'e}line and G{\"u}ng{\"o}r, Tunga and Habash, Nizar and Hafsteinsson, Hinrik and Haji{\v c}, Jan and Haji{\v c} jr., Jan and H{\"a}m{\"a}l{\"a}inen, Mika and H{\`a} M{\~y}, Linh and Han, Na-Rae and Hanifmuti, Muhammad Yudistira and Hardwick, Sam and Harris, Kim and Haug, Dag and Heinecke, Johannes and Hellwig, Oliver and Hennig, Felix and Hladk{\'a}, Barbora and Hlav{\'a}{\v c}ov{\'a}, Jaroslava and Hociung, Florinel and Hohle, Petter and Huber, Eva and Hwang, Jena and Ikeda, Takumi and Ingason, Anton Karl and Ion, Radu and Irimia, Elena and Ishola, {\d O}l{\'a}j{\'{\i}}d{\'e} and Jel{\'{\i}}nek, Tom{\'a}{\v s} and Johannsen, Anders and J{\'o}nsd{\'o}ttir, Hildur and J{\o}rgensen, Fredrik and Juutinen, Markus and K, Sarveswaran and Ka{\c s}{\i}kara, H{\"u}ner and Kaasen, Andre and Kabaeva, Nadezhda and Kahane, Sylvain and Kanayama, Hiroshi and Kanerva, Jenna and Katz, Boris and Kayadelen, Tolga and Kenney, Jessica and Kettnerov{\'a}, V{\'a}clava and Kirchner, Jesse and Klementieva, Elena and K{\"o}hn, Arne and K{\"o}ksal, Abdullatif and Kopacewicz, Kamil and Korkiakangas, Timo and Kotsyba, Natalia and Kovalevskait{\.e}, Jolanta and Krek, Simon and Krishnamurthy, Parameswari and Kwak, Sookyoung and Laippala, Veronika and Lam, Lucia and Lambertino, Lorenzo and Lando, Tatiana and Larasati, Septina Dian and Lavrentiev, Alexei and Lee, John and L{\^e} H{\`{\^o}}ng, Phương and Lenci, Alessandro and Lertpradit, Saran and Leung, Herman and Levina, Maria and Li, Cheuk Ying and Li, Josie and Li, Keying and Li, Yuan and Lim, {KyungTae} and Linden, Krister and Ljubesic, Nikola and Loginova, Olga and Luthfi, Andry and Luukko, Mikko and Lyashevskaya, Olga and Lynn, Teresa and Macketanz, Vivien and Makazhanov, Aibek and Mandl, Michael and Manning, Christopher and Manurung, Ruli and Maranduc, Catalina and Marcek, David and 
Marheinecke, Katrin and Mart{\'{\i}}nez Alonso, H{\'e}ctor and Martins, Andr{\'e} and Masek, Jan and Matsuda, Hiroshi and Matsumoto, Yuji and {McDonald}, Ryan and {McGuinness}, Sarah and Mendonca, Gustavo and Miekka, Niko and Mischenkova, Karina and Misirpashayeva, Margarita and Missil{\"a}, Anna and Mititelu, Catalin and Mitrofan, Maria and Miyao, Yusuke and Mojiri Foroushani, {AmirHossein} and Moloodi, Amirsaeid and Montemagni, Simonetta and More, Amir and Moreno Romero, Laura and Mori, Keiko Sophie and Mori, Shinsuke and Morioka, Tomohiko and Moro, Shigeki and Mortensen, Bjartur and Moskalevskyi, Bohdan and Muischnek, Kadri and Munro, Robert and Murawaki, Yugo and M{\"u}{\"u}risep, Kaili and Nainwani, Pinkey and Nakhl{\'e}, Mariam and Navarro Hor{\~n}iacek, Juan Ignacio and Nedoluzhko,
Anna and Ne{\v s}pore-B{\=e}rzkalne, Gunta and Nguy{\~{\^e}}n Th{\d i}, Lương and Nguy{\~{\^e}}n Th{\d i} Minh, Huy{\`{\^e}}n and Nikaido, Yoshihiro and Nikolaev, Vitaly and Nitisaroj, Rattima and Nourian, Alireza and Nurmi, Hanna and Ojala, Stina and Ojha, Atul Kr. and Ol{\'u}{\`o}kun, Ad{\'e}day{\d o}̀ and Omura, Mai and Onwuegbuzia, Emeka and Osenova, Petya and {\"O}stling, Robert and {\O}vrelid, Lilja and {\"O}zate{\c s}, {\c S}aziye Bet{\"u}l and {\"O}zg{\"u}r, Arzucan and {\"O}zt{\"u}rk Ba{\c s}aran, Balk{\i}z and Partanen, Niko and Pascual, Elena and Passarotti, Marco and Patejuk, Agnieszka and Paulino-Passos, Guilherme and Peljak-{\L}api{\'n}ska, Angelika and Peng, Siyao and Perez, Cenel-Augusto and Perkova, Natalia and Perrier, Guy and Petrov, Slav and Petrova, Daria and Phelan, Jason and Piitulainen, Jussi and Pirinen, Tommi A and Pitler, Emily and Plank, Barbara and Poibeau, Thierry and Ponomareva, Larisa and Popel, Martin and Pretkalnina, Lauma and Pr{\'e}vost, Sophie and Prokopidis, Prokopis and Przepi{\'o}rkowski, Adam and Puolakainen, Tiina and Pyysalo, Sampo and Qi, Peng and R{\"a}{\"a}bis, Andriela and Rademaker, Alexandre and Rama, Taraka and Ramasamy, Loganathan and Ramisch, Carlos and Rashel, Fam and Rasooli, Mohammad Sadegh and Ravishankar, Vinit and Real, Livy and Rebeja, Petru and Reddy, Siva and Rehm, Georg and Riabov, Ivan and Rie{\ss}ler, Michael and Rimkut{\.e}, Erika and Rinaldi, Larissa and Rituma, Laura and Rocha, Luisa and R{\"o}gnvaldsson, Eir{\'{\i}}kur and Romanenko, Mykhailo and Rosa, Rudolf and Roșca, Valentin and Rovati, Davide and Rudina, Olga and Rueter, Jack and R{\'u}narsson, Kristjan and Sadde, Shoval and Safari, Pegah and Sagot, Benoit and Sahala, Aleksi and Saleh, Shadi and Salomoni, Alessio and Samardzi{\'c}, Tanja and Samson, Stephanie and Sanguinetti, Manuela and S{\"a}rg,
Dage and Saul{\={\i}}te, Baiba and Sawanakunanon, Yanin and Scannell, Kevin and Scarlata, Salvatore and Schneider, Nathan and Schuster, Sebastian and Seddah, Djam{\'e} and Seeker, Wolfgang and Seraji, Mojgan and Shen, Mo and Shimada, Atsuko and Shirasu, Hiroyuki and Shohibussirri, Muh and Sichinava, Dmitry and Sigurðsson, Einar Freyr and Silveira, Aline and Silveira, Natalia and Simi, Maria and Simionescu, Radu and Simk{\'o}, Katalin and {\v S}imkov{\'a}, M{\'a}ria and Simov, Kiril and Skachedubova, Maria and Smith, Aaron and Soares-Bastos, Isabela and Spadine, Carolyn and Steingr{\'{\i}}msson, Stein{\t h}{\'o}r and Stella, Antonio and Straka, Milan and Strickland, Emmett and Strnadov{\'a}, Jana and Suhr, Alane and Sulestio, Yogi Lesmana and Sulubacak, Umut and Suzuki, Shingo and Sz{\'a}nt{\'o}, Zsolt and Taji, Dima and Takahashi, Yuta and Tamburini, Fabio and Tan, Mary Ann C. and Tanaka, Takaaki and Tella, Samson and Tellier, Isabelle and Thomas, Guillaume and Torga, Liisi and Toska, Marsida and Trosterud, Trond and Trukhina, Anna and Tsarfaty, Reut and T{\"u}rk, Utku and Tyers, Francis and Uematsu, Sumire and Untilov, Roman and Uresov{\'a}, Zdenka and Uria, Larraitz and Uszkoreit, Hans and Utka, Andrius and Vajjala, Sowmya and van Niekerk, Daniel and van Noord, Gertjan and Varga, Viktor and Villemonte de la Clergerie, Eric and Vincze, Veronika and Wakasa, Aya and Wallenberg, Joel C. and Wallin, Lars and Walsh, Abigail and Wang, Jing Xian and Washington, Jonathan North and Wendt, Maximilan and Widmer, Paul and Williams, Seyi and Wir{\'e}n, Mats and Wittern, Christian and Woldemariam, Tsegay and Wong, Tak-sum and Wr{\'o}blewska, Alina and Yako, Mary and Yamashita, Kayo and Yamazaki, Naoki and Yan, Chunxiao and Yasuoka, Koichi and Yavrumyan, Marat M. and Yu, Zhuoran and Zabokrtsk{\'y}, Zdenek and Zahra, Shorouq and Zeldes, Amir and Zhu, Hanzhi and Zhuravleva, Anna},
url = {http://hdl.handle.net/11234/1-3424},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Licence Universal Dependencies v2.7},
year = {2020} } | Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal) Stanford dependencies (de Marneffe et al., 2006, 2008, 2014), Google universal part-of-speech tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets (Zeman, 2008). | false | 2 | false | projecte-aina/UD_Catalan-AnCora | 2022-10-26T15:08:47.000Z | null | false | 3be48607d4b49c42982b687b3efdc4b77eaebd6f | [] | [
"annotations_creators:expert-generated",
"language:ca",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:monolingual",
"task_categories:token-classification",
"task_ids:part-of-speech"
] | https://huggingface.co/datasets/projecte-aina/UD_Catalan-AnCora/resolve/main/README.md | ---
YAML tags:
annotations_creators:
- expert-generated
language:
- ca
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: UD_Catalan-AnCora
size_categories: []
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- part-of-speech
---
# UD_Catalan-AnCora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/UniversalDependencies/UD_Catalan-AnCora
- **Point of Contact:** [Daniel Zeman](zeman@ufal.mff.cuni.cz)
### Dataset Summary
This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/).
### Supported Tasks and Leaderboards
POS tagging
### Languages
The dataset is in Catalan (`ca-CA`)
## Dataset Structure
### Data Instances
The dataset consists of three CoNLL-U files.
Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
2) Blank lines marking sentence boundaries.
3) Comment lines starting with hash (#).
### Data Fields
Word lines contain the following fields:
1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
2) FORM: Word form or punctuation symbol.
3) LEMMA: Lemma or stem of word form.
4) UPOS: Universal part-of-speech tag.
5) XPOS: Language-specific part-of-speech tag; underscore if not available.
6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
7) HEAD: Head of the current word, which is either a value of ID or zero (0).
8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
10) MISC: Any other annotation.
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html)
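As a rough illustration (not part of the official card), a word line can be split into the 10 fields described above. This is a minimal sketch assuming tab-separated input; real projects should prefer a dedicated parser such as the `conllu` library:

```python
# Minimal sketch: parse one CoNLL-U word line into the 10 fields above.
# Comment lines start with '#', blank lines mark sentence boundaries,
# and fields are separated by single tab characters.

FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
          "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_word_line(line: str) -> dict:
    """Split a tab-separated word line into a field-name -> value dict."""
    cols = line.rstrip("\n").split("\t")
    if len(cols) != 10:
        raise ValueError(f"expected 10 fields, got {len(cols)}")
    return dict(zip(FIELDS, cols))

# Hypothetical example line (not taken from the corpus itself):
example = "1\tHola\thola\tINTJ\t_\t_\t0\troot\t_\t_"
print(parse_word_line(example)["UPOS"])  # INTJ
```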
### Data Splits
- ca_ancora-ud-train.conllu
- ca_ancora-ud-dev.conllu
- ca_ancora-ud-test.conllu
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
- [UD_Catalan-AnCora](https://github.com/UniversalDependencies/UD_Catalan-AnCora)
#### Initial Data Collection and Normalization
The original annotation was done in a constituency framework as a part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/).
To learn more about Universal Dependencies, visit the webpage [https://universaldependencies.org](https://universaldependencies.org)
#### Who are the source language producers?
For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Annotations
#### Annotation process
For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/).
#### Who are the annotators?
For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Citation Information
The following paper must be cited when using this corpus:
Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
To cite the Universal Dependencies project:
Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
|
bigscience | null | @misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. | false | 57 | false | bigscience/xP3mt | 2022-11-04T01:55:28.000Z | null | false | 1cac4727b8fe2de466c0f1d2e82f9d6b6b952200 | [] | [
"arxiv:2211.01786",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"lang... | https://huggingface.co/datasets/bigscience/xP3mt/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:niklas@hf.co)
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + our evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
    "inputs": "Oración 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nOración 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nPregunta: ¿La oración 1 parafrasea la oración 2? ¿Si o no?",
"targets": "Sí"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
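A minimal sketch of consuming such records, assuming one JSON object per line (as in the `merged_{lang}.jsonl` files referenced in the splits table). The helper name `read_xp3_records` is illustrative, not part of the dataset:

```python
import json
from io import StringIO

def read_xp3_records(fp):
    """Yield (inputs, targets) pairs from a JSON-lines stream where each
    non-empty line is an object with 'inputs' and 'targets' keys."""
    for line in fp:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        yield record["inputs"], record["targets"]

# Tiny in-memory example mirroring the record shown above:
sample = '{"inputs": "Pregunta: ...", "targets": "Sí"}\n'
pairs = list(read_xp3_records(StringIO(sample)))
print(pairs[0][1])  # Sí
```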
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. We machine-translated prompts for monolingual datasets, so languages with only crosslingual datasets (e.g. Translation) do not have non-English prompts. Languages without non-English prompts are equivalent to [xP3](https://huggingface.co/datasets/bigscience/xP3).
|Language|Kilobytes|%|Samples|%|Non-English prompts|
|--------|------:|-:|---:|-:|-:|
|tw|106288|0.11|265071|0.33| |
|bm|107056|0.11|265180|0.33| |
|ak|108096|0.11|265071|0.33| |
|ca|110608|0.11|271191|0.34| |
|eu|113008|0.12|281199|0.35| |
|fon|113072|0.12|265063|0.33| |
|st|114080|0.12|265063|0.33| |
|ki|115040|0.12|265180|0.33| |
|tum|116032|0.12|265063|0.33| |
|wo|122560|0.13|365063|0.46| |
|ln|126304|0.13|365060|0.46| |
|as|156256|0.16|265063|0.33| |
|or|161472|0.17|265063|0.33| |
|kn|165456|0.17|265063|0.33| |
|ml|175040|0.18|265864|0.33| |
|rn|192992|0.2|318189|0.4| |
|nso|229712|0.24|915051|1.14| |
|tn|235536|0.24|915054|1.14| |
|lg|235936|0.24|915021|1.14| |
|rw|249360|0.26|915043|1.14| |
|ts|250256|0.26|915044|1.14| |
|sn|252496|0.26|865056|1.08| |
|xh|254672|0.26|915058|1.14| |
|zu|263712|0.27|915061|1.14| |
|ny|272128|0.28|915063|1.14| |
|ig|325440|0.33|950097|1.19|✅|
|yo|339664|0.35|913021|1.14|✅|
|ne|398144|0.41|315754|0.39|✅|
|pa|529632|0.55|339210|0.42|✅|
|sw|561392|0.58|1114439|1.39|✅|
|gu|566576|0.58|347499|0.43|✅|
|mr|674000|0.69|417269|0.52|✅|
|bn|854864|0.88|428725|0.54|✅|
|ta|943440|0.97|410633|0.51|✅|
|te|1384016|1.42|573354|0.72|✅|
|ur|1944416|2.0|855756|1.07|✅|
|vi|3113184|3.2|1667306|2.08|✅|
|code|4330752|4.46|2707724|3.38| |
|hi|4469712|4.6|1543441|1.93|✅|
|id|4538768|4.67|2582272|3.22|✅|
|zh|4604112|4.74|3571636|4.46|✅|
|ar|4703968|4.84|2148970|2.68|✅|
|fr|5558912|5.72|5055942|6.31|✅|
|pt|6130016|6.31|3562772|4.45|✅|
|es|7579424|7.8|5151349|6.43|✅|
|en|39252528|40.4|32740750|40.87| |
|total|97150128|100.0|80100816|100.0|✅|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
sasha | null | null | null | false | 1 | false | sasha/stablediffusionbias | 2022-09-28T13:33:54.000Z | null | false | 8b08f37958afaaf8b6afec45f6aa348167ea777f | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/sasha/stablediffusionbias/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|
ankitkupadhyay | null | null | null | false | 1 | false | ankitkupadhyay/XNLI | 2022-09-28T19:27:00.000Z | null | false | c5c81300c6eed75b0c2fba9e702ec21039d9a961 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ankitkupadhyay/XNLI/resolve/main/README.md | ---
license: apache-2.0
---
|
OMGSAMUELRBR | null | null | null | false | 2 | false | OMGSAMUELRBR/Test47236 | 2022-09-28T15:08:59.000Z | null | false | dda37a4cbf1f2cee6d752d6bc501f03c53d90317 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/OMGSAMUELRBR/Test47236/resolve/main/README.md | ---
license: gpl-3.0
---
|
NobuLuis | null | null | null | false | 2 | false | NobuLuis/zeein | 2022-09-28T15:21:04.000Z | null | false | 097422ac9004c632e11f3a0dcd52fca53226f85d | [] | [
"license:other"
] | https://huggingface.co/datasets/NobuLuis/zeein/resolve/main/README.md | ---
license: other
---
|
macfarrut | null | null | null | false | 3 | false | macfarrut/macfarrut | 2022-09-28T15:29:14.000Z | null | false | ee9293bbaae6d3604d2774b49e2cc93aaa10f585 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/macfarrut/macfarrut/resolve/main/README.md | ---
license: openrail
---
|
MrContext | null | null | null | false | 1 | false | MrContext/DREAMCONTEXT | 2022-09-28T15:54:13.000Z | null | false | 67141dfcd78fdce1b716624fe853988f3997b3de | [] | [] | https://huggingface.co/datasets/MrContext/DREAMCONTEXT/resolve/main/README.md | |
poeticoncept | null | null | null | false | 2 | false | poeticoncept/autoportrait | 2022-09-28T22:03:44.000Z | null | false | 6f98b0b08182c7cd804d2c01ea102780ed0ca4ba | [] | [
"license:unknown"
] | https://huggingface.co/datasets/poeticoncept/autoportrait/resolve/main/README.md | ---
license: unknown
---
|
semiller206 | null | null | null | false | 2 | false | semiller206/semiller206 | 2022-09-30T20:01:06.000Z | null | false | 38849a0521e548dd30f944f0e09f1799edf90415 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/semiller206/semiller206/resolve/main/README.md | ---
license: openrail
---
|
CANUTO | null | null | null | false | 2 | false | CANUTO/images | 2022-09-28T16:00:43.000Z | null | false | cc06d31cd266a978219b212ba00e72eb0ad14d4c | [] | [] | https://huggingface.co/datasets/CANUTO/images/resolve/main/README.md | a |
MrProcastinador | null | null | null | false | 2 | false | MrProcastinador/CHOLO | 2022-09-28T16:07:58.000Z | null | false | 4e531582d091467f2f3c4de4e530d0f9733314b5 | [] | [] | https://huggingface.co/datasets/MrProcastinador/CHOLO/resolve/main/README.md | |
khalidx199 | null | null | null | false | 2 | false | khalidx199/k199 | 2022-09-28T16:49:21.000Z | null | false | 2729379a3f4648fdee939b5e501e3dc2789d27e5 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/khalidx199/k199/resolve/main/README.md | ---
license: apache-2.0
---
|
Julioqt | null | null | null | false | 3 | false | Julioqt/pruebawobia | 2022-09-28T17:06:15.000Z | null | false | 67943d9fe9fa298222d4651003f417159796259c | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Julioqt/pruebawobia/resolve/main/README.md | ---
license: openrail
---
|
almost | null | null | null | false | 1 | false | almost/test | 2022-09-28T16:51:34.000Z | null | false | e85d8a286079ca576ea7d8820dfd0f20f57dbef5 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/almost/test/resolve/main/README.md | ---
license: afl-3.0
---
|
PCScreen | null | null | null | false | 1 | false | PCScreen/Thomaz_Junior | 2022-09-28T16:57:51.000Z | null | false | 74c2e9f15ecd969d74ae3f82749c26d10268190a | [] | [
"license:unknown"
] | https://huggingface.co/datasets/PCScreen/Thomaz_Junior/resolve/main/README.md | ---
license: unknown
---
|
kashif | null | null | null | false | 1 | false | kashif/tourism-monthly-batch | 2022-09-28T17:29:04.000Z | null | false | e38cf8f0d16cdefbe65415f8173812f68b24108f | [] | [
"license:cc"
] | https://huggingface.co/datasets/kashif/tourism-monthly-batch/resolve/main/README.md | ---
license: cc
---
|
alxdfy | null | null | null | false | 1 | false | alxdfy/noggles_inversion | 2022-09-28T17:30:23.000Z | null | false | ed89518500ea14c7cf8208d1e82f16bf5abdd07c | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/alxdfy/noggles_inversion/resolve/main/README.md | ---
license: cc0-1.0
---
|
marcosfevre | null | null | null | false | 1 | false | marcosfevre/images | 2022-09-28T19:42:07.000Z | null | false | d0a11f31e2c40f1da8060c3377289514669606d6 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/marcosfevre/images/resolve/main/README.md | ---
license: cc-by-4.0
---
|
CarlosMachucaFotografia | null | null | null | false | 1 | false | CarlosMachucaFotografia/Imagenesmias | 2022-09-28T18:38:45.000Z | null | false | d965544df7c29b63d21cd188684998673e726467 | [] | [] | https://huggingface.co/datasets/CarlosMachucaFotografia/Imagenesmias/resolve/main/README.md | |
JosephEudave | null | null | null | false | 2 | false | JosephEudave/Stabledifussion-dreambooth | 2022-09-28T19:21:08.000Z | null | false | 9a76277bcbb403d82f84201035723d3d7bd600c7 | [] | [
"license:other"
] | https://huggingface.co/datasets/JosephEudave/Stabledifussion-dreambooth/resolve/main/README.md | ---
license: other
---
|
jurer | null | null | null | false | 3 | false | jurer/farias | 2022-09-28T18:51:07.000Z | null | false | 42b703eeb2f8b004158d0cb88752aaeca90eb439 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/jurer/farias/resolve/main/README.md | ---
license: cc-by-4.0
---
|
nuprl | null | @misc{multipl-e,
doi = {10.48550/ARXIV.2208.08227},
url = {https://arxiv.org/abs/2208.08227},
author = {Cassano, Federico and Gouwar, John and Nguyen, Daniel and
Nguyen, Sydney and Phipps-Costin, Luna and Pinckney, Donald and
Yee, Ming-Ho and Zi, Yangtian and Anderson, Carolyn Jane and
Feldman, Molly Q and Guha, Arjun and
Greenberg, Michael and Jangda, Abhinav},
title = {A Scalable and Extensible Approach to Benchmarking NL2Code for 18
Programming Languages},
publisher = {arXiv},
year = {2022},
} | MultiPL-E is a dataset for evaluating large language models for code generation that supports 18 programming languages. It takes the OpenAI "HumanEval" and the MBPP Python benchmarks and uses little compilers to translate them to other languages. It is easy to add support for new languages and benchmarks. | false | 839 | false | nuprl/MultiPL-E | 2022-10-03T16:52:09.000Z | null | false | fa40fdc81019cb5af05453f21989eb0d4b54f355 | [] | [
"arxiv:2208.08227",
"annotations_creators:machine-generated",
"language:en",
"language_creators:machine-generated",
"language_creators:expert-generated",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|openai_humaneval... | https://huggingface.co/datasets/nuprl/MultiPL-E/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: MultiPLE-E
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|openai_humaneval
- extended|mbpp
tags: []
task_categories: []
task_ids: []
---
# Dataset Card for MultiPL-E
## Dataset Description
- **Homepage:** https://nuprl.github.io/MultiPL-E/
- **Repository:** https://github.com/nuprl/MultiPL-E
- **Paper:** https://arxiv.org/abs/2208.08227
- **Point of Contact:** carolyn.anderson@wellesley.edu, mfeldman@oberlin.edu, a.guha@northeastern.edu
## Dataset Summary
MultiPL-E is a dataset for evaluating large language models for code
generation that supports 18 programming languages. It takes the OpenAI
"HumanEval" and the MBPP Python benchmarks and uses little compilers to
translate them to other languages. It is easy to add support for new languages
and benchmarks.
## Subsets
For most purposes, you should use the variations called *SRCDATA-LANG*, where
*SRCDATA* is either "humaneval" or "mbpp" and *LANG* is one of the supported
languages. We use the canonical file extension for each language to identify
the language, e.g., "py" for Python, "cpp" for C++, "lua" for Lua, and so on.
We also provide a few other variations:
- *SRCDATA-LANG-keep* is the same as *SRCDATA-LANG*, but the text of the prompt
is totally unchanged. If the original prompt had Python doctests, they remain
as Python instead of being translated to *LANG*. If the original prompt had
Python-specific terminology, e.g., "list", it remains "list", instead of
being translated, e.g., to "vector" for C++.
- *SRCDATA-LANG-transform* transforms the doctests to *LANG* but leaves
the natural language text of the prompt unchanged.
- *SRCDATA-LANG-removed* removes the doctests from the prompt.
Note that MBPP does not have any doctests, so the "removed" and "transform"
variations are not available for MBPP.
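The subset-naming rule above, including the MBPP restriction, can be sketched as a small helper. This function is purely illustrative and not part of the dataset or its loader; `load_dataset` only expects the final string it returns.

```python
def subset_name(srcdata, lang, variation=None):
    """Build a MultiPL-E subset name such as "humaneval-cpp-keep".

    Hypothetical helper for illustration only. The dataset loader just
    takes the resulting string, e.g. load_dataset("nuprl/MultiPL-E", name).
    """
    if srcdata not in ("humaneval", "mbpp"):
        raise ValueError(f"unknown source dataset: {srcdata}")
    if variation is not None and variation not in ("keep", "transform", "removed"):
        raise ValueError(f"unknown variation: {variation}")
    # MBPP has no doctests, so the doctest-based variations do not exist for it.
    if srcdata == "mbpp" and variation in ("transform", "removed"):
        raise ValueError(f"mbpp has no doctests; '{variation}' is unavailable")
    if variation is None:
        return f"{srcdata}-{lang}"
    return f"{srcdata}-{lang}-{variation}"
```

For example, `subset_name("humaneval", "cpp", "keep")` yields `"humaneval-cpp-keep"`, while asking for `"mbpp"` with `"removed"` raises an error, mirroring the note above.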
## Example
The following script uses the Salesforce/codegen model to generate Lua code
and MultiPL-E to produce a script with unit tests for luaunit.
```python
import datasets
from transformers import AutoTokenizer, AutoModelForCausalLM

LANG = "lua"
MODEL_NAME = "Salesforce/codegen-350M-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).half().cuda()
problems = datasets.load_dataset("nuprl/MultiPL-E", f"humaneval-{LANG}")

def stop_at_stop_token(decoded_string, problem):
    """
    Truncates the output at stop tokens, taking care to skip the prompt
    which may have stop tokens.
    """
    min_stop_index = len(decoded_string)
    for stop_token in problem["stop_tokens"]:
        stop_index = decoded_string.find(stop_token)
        if stop_index != -1 and stop_index > len(problem["prompt"]) and stop_index < min_stop_index:
            min_stop_index = stop_index
    return decoded_string[:min_stop_index]

for problem in problems["test"]:
    input_ids = tokenizer(
        problem["prompt"],
        return_tensors="pt",
    ).input_ids.cuda()
    generated_ids = model.generate(
        input_ids, max_length=512, pad_token_id=tokenizer.eos_token_id
    )
    truncated_string = stop_at_stop_token(tokenizer.decode(generated_ids[0]), problem)
    filename = problem["name"] + "." + LANG
    with open(filename, "w") as f:
        print(f"Created {filename}")
        f.write(truncated_string)
        f.write("\n")
        f.write(problem["tests"])
```
bastiankase | null | null | null | false | 1 | false | bastiankase/dianakreuz | 2022-09-29T18:07:05.000Z | null | false | 34326d1ee26cafea5e2ac83b0f3b5308de2077c0 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/bastiankase/dianakreuz/resolve/main/README.md | ---
license: openrail
---
|
LuisPerezT | null | null | null | false | 1 | false | LuisPerezT/Fotos | 2022-09-28T21:27:29.000Z | null | false | 53f065e69993fb412774efb69e933fec782970e4 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LuisPerezT/Fotos/resolve/main/README.md | ---
license: openrail
---
|
Grim421 | null | null | null | false | 3 | false | Grim421/testing | 2022-09-28T19:51:56.000Z | null | false | cda2e3de3397cb59cb0eed606c2179e780e66663 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Grim421/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
cannlytics | null | @inproceedings{cannlytics2022cannabis_licenses,
author = {Skeate, Keegan and O'Sullivan-Sutherland, Candace},
title = {Cannabis Licenses},
booktitle = {Cannabis Data Science},
month = {October},
year = {2022},
address = {United States of America},
publisher = {Cannlytics}
} | Cannabis Licenses (https://cannlytics.com/data/licenses) is a
dataset of curated cannabis license data. The dataset consists of 18
sub-datasets for each state with permitted adult-use cannabis, as well
as a sub-dataset that includes all licenses. | false | 27 | false | cannlytics/cannabis_licenses | 2022-10-08T19:47:54.000Z | null | false | ee3a1272126c3cb6ebf434c1dc63ae8ceb33f22e | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"source_datasets:original",
"tags:cannabis",
"tags:licenses",
"tags:licensees",
"tags:retail"
] | https://huggingface.co/datasets/cannlytics/cannabis_licenses/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
license:
- cc-by-4.0
pretty_name: cannabis_licenses
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- cannabis
- licenses
- licensees
- retail
---
# Cannabis Licenses, Curated by Cannlytics
<div align="center" style="text-align:center; margin-top:1rem; margin-bottom: 1rem;">
<img style="max-height:365px;width:100%;max-width:720px;" alt="" src="analysis/figures/cannabis-licenses-map.png">
</div>
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Data Collection and Normalization](#data-collection-and-normalization)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [License](#license)
- [Citation](#citation)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** <https://github.com/cannlytics/cannlytics>
- **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_licenses>
- **Point of Contact:** <dev@cannlytics.com>
### Dataset Summary
**Cannabis Licenses** is a collection of cannabis license data for each state with permitted adult-use cannabis. The dataset also includes a sub-dataset, `all`, that includes all licenses.
## Dataset Structure
The dataset is partitioned into 18 subsets: one for each state with published license data, plus the aggregate.
| State | Code | Status |
|-------|------|--------|
| [All](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/all) | `all` | ✅ |
| [Alaska](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ak) | `ak` | ✅ |
| [Arizona](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/az) | `az` | ✅ |
| [California](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ca) | `ca` | ✅ |
| [Colorado](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/co) | `co` | ✅ |
| [Connecticut](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ct) | `ct` | ✅ |
| [Illinois](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/il) | `il` | ✅ |
| [Maine](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/me) | `me` | ✅ |
| [Massachusetts](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ma) | `ma` | ✅ |
| [Michigan](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mi) | `mi` | ✅ |
| [Montana](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/mt) | `mt` | ✅ |
| [Nevada](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nv) | `nv` | ✅ |
| [New Jersey](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nj) | `nj` | ✅ |
| New York | `ny` | ⏳ Expected 2022 Q4 |
| [New Mexico](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/nm) | `nm` | ⚠️ Under development |
| [Oregon](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/or) | `or` | ✅ |
| [Rhode Island](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/ri) | `ri` | ✅ |
| [Vermont](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/vt) | `vt` | ✅ |
| Virginia | `va` | ⏳ Expected 2024 |
| [Washington](https://huggingface.co/datasets/cannlytics/cannabis_licenses/tree/main/data/wa) | `wa` | ✅ |
The following 18 jurisdictions have issued medical cannabis licenses, but are not (yet) included in the dataset:
- Alabama
- Arkansas
- Delaware
- District of Columbia (D.C.)
- Florida
- Louisiana
- Maryland
- Minnesota
- Mississippi
- Missouri
- New Hampshire
- North Dakota
- Ohio
- Oklahoma
- Pennsylvania
- South Dakota
- Utah
- West Virginia
### Data Instances
You can load the licenses for each state. For example:
```py
from datasets import load_dataset
# Get the licenses for a specific state.
dataset = load_dataset('cannlytics/cannabis_licenses', 'ca')
data = dataset['data']
assert len(data) > 0
print('%i licenses.' % len(data))
```
### Data Fields
Below is a non-exhaustive list of the standardized fields that you may encounter in the license data.
| Field | Example | Description |
|-------|-----|-------------|
| `id` | `"1046"` | A state-unique ID for the license. |
| `license_number` | `"C10-0000423-LIC"` | A unique license number. |
| `license_status` | `"Active"` | The status of the license. Only licenses that are active are included. |
| `license_status_date` | `"2022-04-20T00:00"` | The date the status was assigned, an ISO-formatted date if present. |
| `license_term` | `"Provisional"` | The term for the license. |
| `license_type` | `"Commercial - Retailer"` | The type of business license. |
| `license_designation` | `"Adult-Use and Medicinal"` | A state-specific classification for the license. |
| `issue_date` | `"2019-07-15T00:00:00"` | An issue date for the license, an ISO-formatted date if present. |
| `expiration_date` | `"2023-07-14T00:00:00"` | An expiration date for the license, an ISO-formatted date if present. |
| `licensing_authority_id` | `"BCC"` | A unique ID for the state licensing authority. |
| `licensing_authority` | `"Bureau of Cannabis Control (BCC)"` | The state licensing authority. |
| `business_legal_name` | `"Movocan"` | The legal name of the business that owns the license. |
| `business_dba_name` | `"Movocan"` | The name the license is doing business as. |
| `business_owner_name` | `"redacted"` | The name of the owner of the license. |
| `business_structure` | `"Corporation"` | The structure of the business that owns the license. |
| `activity` | `"Pending Inspection"` | Any relevant license activity. |
| `premise_street_address` | `"1632 Gateway Rd"` | The street address of the business. |
| `premise_city` | `"Calexico"` | The city of the business. |
| `premise_state` | `"CA"` | The state abbreviation of the business. |
| `premise_county` | `"Imperial"` | The county of the business. |
| `premise_zip_code` | `"92231"` | The zip code of the business. |
| `business_email` | `"redacted@gmail.com"` | The business email of the license. |
| `business_phone` | `"(555) 555-5555"` | The business phone of the license. |
| `business_website` | `"cannlytics.com"` | The business website of the license. |
| `parcel_number` | `"A42"` | An ID for the business location. |
| `premise_latitude` | `32.69035693` | The latitude of the business. |
| `premise_longitude` | `-115.38987552` | The longitude of the business. |
| `data_refreshed_date` | `"2022-09-21T12:16:33.3866667"` | An ISO-formatted time when the license data was updated. |
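As a small sketch of how the date fields above can be used, the snippet below parses the ISO-formatted `issue_date` and `expiration_date` fields with the standard library. The helper name is ours and not part of the dataset; the example values are taken directly from the field table.

```python
from datetime import datetime

def is_active_on(record, when):
    """Return True if a license record's issue/expiration window covers `when`.

    `record` is a dict with ISO-formatted "issue_date" and "expiration_date"
    fields, as documented in the field table. Illustrative helper only.
    """
    issued = datetime.fromisoformat(record["issue_date"])
    expires = datetime.fromisoformat(record["expiration_date"])
    return issued <= when <= expires

# Example values taken from the field table.
record = {
    "issue_date": "2019-07-15T00:00:00",
    "expiration_date": "2023-07-14T00:00:00",
}
print(is_active_on(record, datetime(2022, 4, 20)))  # True for this record
```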
### Data Splits
The data is split into subsets by state. You can retrieve all licenses by requesting the `all` subset.
```py
from datasets import load_dataset
# Get all cannabis licenses.
repo = 'cannlytics/cannabis_licenses'
dataset = load_dataset(repo, 'all')
data = dataset['data']
```
## Dataset Creation
### Curation Rationale
Data about organizations operating in the cannabis industry for each state is valuable for research.
### Source Data
| State | Data Source URL |
|-------|-----------------|
| Alaska | <https://www.commerce.alaska.gov/abc/marijuana/Home/licensesearch> |
| Arizona | <https://azcarecheck.azdhs.gov/s/?licenseType=null> |
| California | <https://search.cannabis.ca.gov/> |
| Colorado | <https://sbg.colorado.gov/med/licensed-facilities> |
| Connecticut | <https://portal.ct.gov/DCP/Medical-Marijuana-Program/Connecticut-Medical-Marijuana-Dispensary-Facilities> |
| Illinois | <https://www.idfpr.com/LicenseLookup/AdultUseDispensaries.pdf> |
| Maine | <https://www.maine.gov/dafs/ocp/open-data/adult-use> |
| Massachusetts | <https://masscannabiscontrol.com/open-data/data-catalog/> |
| Michigan | <https://michigan.maps.arcgis.com/apps/webappviewer/index.html?id=cd5a1a76daaf470b823a382691c0ff60> |
| Montana | <https://mtrevenue.gov/cannabis/#CannabisLicenses> |
| Nevada | <https://ccb.nv.gov/list-of-licensees/> |
| New Jersey | <https://data.nj.gov/stories/s/ggm4-mprw> |
| New Mexico | <https://nmrldlpi.force.com/bcd/s/public-search-license?division=CCD&language=en_US> |
| Oregon | <https://www.oregon.gov/olcc/marijuana/pages/recreational-marijuana-licensing.aspx> |
| Rhode Island | <https://dbr.ri.gov/office-cannabis-regulation/compassion-centers/licensed-compassion-centers> |
| Vermont | <https://ccb.vermont.gov/licenses> |
| Washington | <https://lcb.wa.gov/records/frequently-requested-lists> |
### Data Collection and Normalization
In the `algorithms` directory, you can find the algorithms used for data collection. You can use these algorithms to recreate the dataset. First, you will need to clone the repository:
```
git clone https://huggingface.co/datasets/cannlytics/cannabis_licenses
```
You can then install the algorithm Python (3.9+) requirements:
```
cd cannabis_licenses
pip install -r requirements.txt
```
Then you can run all of the data-collection algorithms:
```
python algorithms/main.py
```
Or you can run each algorithm individually. For example:
```
python algorithms/get_licenses_ca.py
```
### Personal and Sensitive Information
This dataset includes names of individuals, public addresses, and contact information for cannabis licensees. It is important to take care to use these data points in a legal manner.
## Considerations for Using the Data
### Social Impact of Dataset
Arguably, substantial social impact could result from the study of permitted adult-use cannabis; therefore, researchers and data consumers alike should take the utmost care in using this dataset.
### Discussion of Biases
Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.
### Other Known Limitations
The data is for adult-use cannabis licenses. It would be valuable to include medical cannabis licenses too.
## Additional Information
### Dataset Curators
Curated by [🔥Cannlytics](https://cannlytics.com)<br>
<contact@cannlytics.com>
### License
```
Copyright (c) 2022 Cannlytics and the Cannabis Data Science Team
The files associated with this dataset are licensed under a
Creative Commons Attribution 4.0 International license.
You can share, copy and modify this dataset so long as you give
appropriate credit, provide a link to the CC BY license, and
indicate if changes were made, but you may not do so in a way
that suggests the rights holder has endorsed you or your use of
the dataset. Note that further permission may be required for
any content within the dataset that is identified as belonging
to a third party.
```
### Citation
Please cite the following if you use the code examples in your research:
```bibtex
@misc{cannlytics2022,
title={Cannabis Data Science},
author={Skeate, Keegan and O'Sullivan-Sutherland, Candace},
journal={https://github.com/cannlytics/cannabis-data-science},
year={2022}
}
```
### Contributions
Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
|
radm | null | null | null | false | 3 | false | radm/tathagata | 2022-09-28T20:20:13.000Z | null | false | 3562204543b81d961ccef05e11e3d69011fe5104 | [] | [
"annotations_creators:found",
"language:ru",
"language_creators:found",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"tags:text_generation",
"tags:quotes",
"task_categories:text-generation",
"task_ids:language-modeling"
] | https://huggingface.co/datasets/radm/tathagata/resolve/main/README.md | ---
annotations_creators:
- found
language:
- ru
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: tathagata
size_categories:
- n<1K
source_datasets:
- original
tags:
- text_generation
- quotes
task_categories:
- text-generation
task_ids:
- language-modeling
---
# **Dataset Card for tathagata**
# **I-Dataset Summary**
tathagata.txt is a dataset based on summaries of major Buddhist, Hindu and Advaita texts such as:
- Diamond Sutra
- Lankavatara Sutra
- Sri Nisargadatta Maharaj quotes
- Quotes from the Bhagavad Gita
This dataset was used to train this model https://huggingface.co/radm/rugpt3medium-tathagata
# **II-Languages**
The texts in the dataset are in Russian (ru). |
valluvera | null | null | null | false | 1 | false | valluvera/gemma | 2022-09-28T20:12:34.000Z | null | false | bc637e0366cdba0bf5cd9542b4cb6ed819d925b7 | [] | [
"license:other"
] | https://huggingface.co/datasets/valluvera/gemma/resolve/main/README.md | ---
license: other
---
|
bjornsing | null | null | null | false | 3 | false | bjornsing/PCG-signals | 2022-09-28T20:44:06.000Z | null | false | 9d61249c9d960863eeefff485280129c7c0b1e44 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/bjornsing/PCG-signals/resolve/main/README.md | ---
license: cc-by-4.0
---
|
thewalkerdenton | null | null | null | false | 3 | false | thewalkerdenton/Canny | 2022-09-28T21:02:20.000Z | null | false | c10a50d07a444af455999711419682ae9d6dba15 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/thewalkerdenton/Canny/resolve/main/README.md | ---
license: apache-2.0
---
|