id stringlengths 2 115 | author stringlengths 2 42 ⌀ | last_modified timestamp[us, tz=UTC] | downloads int64 0 8.87M | likes int64 0 3.84k | paperswithcode_id stringlengths 2 45 ⌀ | tags list | lastModified timestamp[us, tz=UTC] | createdAt stringlengths 24 24 | key stringclasses 1 value | created timestamp[us] | card stringlengths 1 1.01M | embedding list | library_name stringclasses 21 values | pipeline_tag stringclasses 27 values | mask_token null | card_data null | widget_data null | model_index null | config null | transformers_info null | spaces null | safetensors null | transformersInfo null | modelId stringlengths 5 111 ⌀ | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Biomedical-TeMU/ProfNER_corpus_classification | Biomedical-TeMU | 2022-03-10T21:24:30Z | 20 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | 2022-03-10T21:24:30Z | 2022-03-10T20:28:10.000Z | 2022-03-10T20:28:10 | ---
license: cc-by-4.0
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/catalonia_independence_ca | SetFit | 2022-03-13T09:10:29Z | 20 | 0 | null | [
"region:us"
] | 2022-03-13T09:10:29Z | 2022-03-13T02:43:15.000Z | 2022-03-13T02:43:15 | # Catalonian independence tweet dataset
This dataset is a port of the official [`catalonia_independence` dataset](https://huggingface.co/datasets/catalonia_independence) on the Hub. It contains only the Catalan-language version. | [
-0.32041841745376587,
-0.4489215016365051,
0.10577990859746933,
0.8200194835662842,
-0.4843854606151581,
0.4602813124656677,
-0.08048216253519058,
-0.0641593486070633,
1.251279354095459,
0.9070641398429871,
-0.7140899300575256,
-0.7873138189315796,
-0.46182915568351746,
-0.2508253753185272... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
GEM-submissions/lewtun__this-is-a-test__1647263213 | GEM-submissions | 2022-03-14T13:06:58Z | 20 | 0 | null | [
"benchmark:gem",
"evaluation",
"benchmark",
"region:us"
] | 2022-03-14T13:06:58Z | 2022-03-14T13:06:57.000Z | 2022-03-14T13:06:57 | ---
benchmark: gem
type: prediction
submission_name: This is a test
tags:
- evaluation
- benchmark
---
# GEM Submission
Submission name: This is a test
| [
-0.01583663560450077,
-0.9654795527458191,
0.584195077419281,
0.12924723327159882,
-0.2803734242916107,
0.45494675636291504,
0.18859510123729706,
0.3502407371997833,
0.47759556770324707,
0.41622957587242126,
-1.1466838121414185,
-0.1300487071275711,
-0.49302777647972107,
0.0401801504194736... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
scjnugacj/scjn_dataset_ner | scjnugacj | 2022-10-23T05:14:56Z | 20 | 0 | null | [
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-23T05:14:56Z | 2022-03-19T03:13:28.000Z | 2022-03-19T03:13:28 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- es
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Corpus SCJN NER
size_categories:
- unknown
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Corpus SCJN NER, for named entity recognition
In its first version, it contains labels for identifying laws and international treaties to which the Mexican State is a party.
## Dataset Structure
### Data Instances
An example from 'train' looks as follows:
```
{
'id': '3',
'ner_tags': [0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'tokens': ['el', 'artículo', '15', 'de', 'la', 'ley', 'general', 'de', 'títulos', 'y', 'operaciones', 'de', 'crédito', 'exige', 'que', 'se', 'satisfagan', 'las', 'expresiones', 'omitidas', 'en', 'el', 'título', ',', 'antes', 'de', 'la', 'presentación', 'de', 'éste', 'para', 'su', 'aceptación', 'o', 'para', 'su', 'pago', '.', 'aunque', 'varios', 'autores', 'estiman', 'que', 'el', 'tenedor', 'puede', 'completar', 'los', 'requisitos', 'faltantes', 'a', 'la', 'cambial', ',', 'en', 'cualquier', 'instante', 'anterior', 'a', 'su', 'vencimiento', ',', 'este', 'criterio', 'no', 'es', 'aplicable', 'frente', 'a', 'la', 'disposición', 'terminante', 'de', 'la', 'ley', 'mexicana', ';', 'y', 'si', 'nuestro', 'legislador', 'hubiera', 'aceptado', 'la', 'posibilidad', 'de', 'llenar', 'los', 'requisitos', 'en', 'cualquier', 'momento', ',', 'hasta', 'antes', 'de', 'la', 'presentación', 'del', 'documento', 'para', ',', 'el', 'pago', ',', 'no', 'habría', 'hablado', 'de', 'la', 'presentación', 'para', 'la', 'aceptación', ';', 'máxime', ',', 'que', 'mientras', 'todas', 'las', 'letras', 'de', 'cambio', 'son', 'susceptibles', 'de', 'pago', ',', 'no', 'todas', 'lo', 'son', 'de', 'aceptación', '.', 'la', 'cambial', 'en', 'blanco', 'bien', 'puede', 'existir', 'y', 'circular', 'antes', 'de', 'que', 'sea', 'presentada', 'para', 'su', 'aceptación', ';', 'pero', 'cuando', 'ya', 'el', 'tenedor', 'va', 'a', 'hacer', 'valer', 'sus', 'derechos', '(', 'y', 'la', 'presentación', 'para', 'la', 'aceptación', 'es', 'el', 'ejercicio', 'de', 'uno', 'de', 'ellos', ')', ',', 'debe', 'llenar', 'los', 'extremos', 'necesarios', 'y', 'presentar', 'un', 'documento', 'completo', '.', 'cuando', 'el', 'girado', ',', 'al', 'aceptar', 'la', 'letra', ',', 'se', 'muestra', 'conforme', 'en', 'que', 'después', 'se', 'llene', 'la', 'expresión', 'de', 'su', 'importe', ',', 'ello', 'no', 'le', 'reporta', 'perjuicio', ',', 'si', 'el', 'beneficiario', 'lo', 'hace', 'dentro', 'de', 'los', 'límites', 'convenidos', ';', 'más', 'si', 'éste', 'se', 'excede', 'en', 'la', 'expresión', 'de', 'la', 'cantidad', 'convenida', ',', 'el', 'girado', 'sí', 'recibe', 'perjuicio', 'considerable', ',', 'ya', 'que', 'a', 'pesar', 'de', 'que', 'pueda', 'válidamente', 'oponer', 'las', 'excepciones', 'de', 'dolo', 'y', 'plus', 'petitio', 'correspondientes', ',', 'frente', 'al', 'beneficiario', 'que', 'violó', 'lo', 'pactado', ',', 'no', 'podrá', 'hacerlo', 'si', 'el', 'tenedor', 'es', 'un', 'tercero', 'que', 'de', 'buena', 'fe', 'adquirió', 'el', 'documento', ',', 'ignorando', 'las', 'circunstancias', 'precedentes', ';', 'en', 'cambio', ',', 'si', 'de', 'acuerdo', 'con', 'lo', 'preceptuado', 'por', 'nuestra', 'ley', ',', 'falta', 'el', 'título', 'de', 'crédito', ',', 'pues', 'el', 'documento', 'cuyos', 'requisitos', 'omitidos', 'no', 'se', 'satisficieron', 'oportunamente', ',', 'no', 'produce', 'efectos', 'como', 'tal', '(', 'artículo', '14', 'de', 'la', 'ley', 'de', 'la', 'materia', ')', ',', 'ésta', 'será', 'excepción', 'que', ',', 'demostrada', ',', 'puede', 'ser', 'oponible', 'a', 'cualquier', 'tenedor', ',', 'es', 'decir', ',', 'ya', 'no', 'será', 'una', 'excepción', 'personal', ',', 'sino', 'una', 'excepción', 'real', '.']
}
```
### Data Fields
The fields are the same for all splits.
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-LEY': 1, 'I-LEY': 2, 'B-TRAT_INTL': 3, 'I-TRAT_INTL': 4}
```
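For illustration, a minimal loading sketch (the repo id is taken from this dataset's Hub path, and the split names follow the table in the next section), decoding tag indices back to string labels with the mapping above:
```python
from datasets import load_dataset

# Repo id taken from this dataset's Hub path.
dataset = load_dataset("scjnugacj/scjn_dataset_ner")

# Invert the tagset above to map indices back to string labels.
tagset = {'O': 0, 'B-LEY': 1, 'I-LEY': 2, 'B-TRAT_INTL': 3, 'I-TRAT_INTL': 4}
id2label = {value: key for key, value in tagset.items()}

example = dataset["train"][0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    if tag != tagset["O"]:  # print only entity tokens
        print(token, id2label[tag])
```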
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|SCJNNER|1396|345|0|
## Dataset Creation
### Annotations
| annotations|train|validation|test|
|---------|----:|---------:|---:|
|LEY|1084|329|0|
|TRAT_INTL|935|161|0|
### Dataset Curators
Ana Gabriela Palomeque Ortiz, from SCJN - Unidad General de Administración del Conocimiento Jurídico.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Other Known Limitations
The information contained in this dataset is for demonstration purposes and does not represent an official source of the Suprema Corte de Justicia de la Nación.
## License
<br/>This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/deed.es">Attribution-ShareAlike 4.0 International License</a>.
| [
-0.5181469321250916,
-0.4186996519565582,
0.2841140329837799,
0.5661032795906067,
-0.38890570402145386,
0.30469709634780884,
-0.10389560461044312,
-0.26417431235313416,
1.0257922410964966,
0.43168047070503235,
-0.5211880207061768,
-0.9160463213920593,
-0.5232495069503784,
0.481914550065994... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pietrolesci/conj_nli | pietrolesci | 2022-04-25T13:27:25Z | 20 | 0 | null | [
"region:us"
] | 2022-04-25T13:27:25Z | 2022-03-25T10:17:37.000Z | 2022-03-25T10:17:37 | ## Overview
The original dataset can be found [here](https://github.com/swarnaHub/ConjNLI). It has been
proposed in [ConjNLI: Natural Language Inference Over Conjunctive Sentences](https://aclanthology.org/2020.emnlp-main.661/).
This dataset is a stress test for natural language inference over conjunctive sentences,
where the premise differs from the hypothesis by conjuncts removed, added, or replaced.
## Dataset curation
The label mapping is the usual `{"entailment": 0, "neutral": 1, "contradiction": 2}`
used in NLI datasets. Note that labels for the `test` split are not available.
Also, the `train` split was originally named `adversarial_train_15k`.
There are 2 instances (joining on "premise", "hypothesis", "label") present in both `train` and `dev`.
Finally, the `train` set contained a few instances without a label; these have been removed.
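As a quick check, here is a hedged sketch of loading the processed dataset back from the Hub (the repo id matches the `push_to_hub` call in the creation script below) and inspecting the label feature:
```python
from datasets import load_dataset

# Repo id matches the push_to_hub call in the creation script below.
conj_nli = load_dataset("pietrolesci/conj_nli")

# The ClassLabel feature carries the entailment/neutral/contradiction -> 0/1/2 mapping.
print(conj_nli["train"].features["label"].names)
print(conj_nli["test"][0])  # test labels are set to -1 (not available)
```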
## Code to create the dataset
```python
import pandas as pd
from datasets import Dataset, ClassLabel, Value, Features, DatasetDict

# download data from repo https://github.com/swarnaHub/ConjNLI
paths = {
    "train": "<path_to_folder>/ConjNLI-master/data/NLI/adversarial_train_15k.tsv",
    "dev": "<path_to_folder>/ConjNLI-master/data/NLI/conj_dev.tsv",
    "test": "<path_to_folder>/ConjNLI-master/data/NLI/conj_test.tsv",
}

dataset_splits = {}
for split, path in paths.items():
    # load data
    df = pd.read_csv(path, sep="\t")

    # encode labels using the default mapping used by other nli datasets,
    # i.e., entailment: 0, neutral: 1, contradiction: 2
    df.columns = df.columns.str.lower()
    if "test" in path:
        df["label"] = -1
    else:
        # remove empty labels
        df = df.loc[~df["label"].isna()]
        # encode labels
        df["label"] = df["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2})

    # cast to dataset (preserve_index=False avoids carrying over the filtered pandas index)
    features = Features({
        "premise": Value(dtype="string", id=None),
        "hypothesis": Value(dtype="string", id=None),
        "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]),
    })
    dataset = Dataset.from_pandas(df, features=features, preserve_index=False)
    dataset_splits[split] = dataset

conj_nli = DatasetDict(dataset_splits)
conj_nli.push_to_hub("pietrolesci/conj_nli", token="<token>")

# check overlap between splits
from itertools import combinations

for i, j in combinations(conj_nli.keys(), 2):
    print(
        f"{i} - {j}: ",
        pd.merge(
            conj_nli[i].to_pandas(),
            conj_nli[j].to_pandas(),
            on=["premise", "hypothesis", "label"],
            how="inner",
        ).shape[0],
    )
#> train - dev: 2
#> train - test: 0
#> dev - test: 0
``` | [
-0.5524442791938782,
-0.7497214078903198,
0.03809717297554016,
0.2844638526439667,
-0.1652035415172577,
-0.16981026530265808,
-0.3377496302127838,
-0.33984941244125366,
0.39312857389450073,
0.3502257466316223,
-0.6623750329017639,
-0.5127240419387817,
-0.5371914505958557,
0.307921826839447... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andreamorgar/spanish_poetry | andreamorgar | 2022-03-30T12:39:22Z | 20 | 2 | null | [
"license:gpl-3.0",
"region:us"
] | 2022-03-30T12:39:22Z | 2022-03-30T12:29:11.000Z | 2022-03-30T12:29:11 | ---
license: gpl-3.0
---
# Spanish Poetry Dataset
There are not many poetry datasets, and for the Spanish language the situation is even worse! With this dataset, we want to provide access to quality Spanish-language data for NLP tasks.
It is a simple dataset, but its potential is huge. I'm itching to discover new literary structures within Spanish literature data, run wider analyses, and so on!
# Authors
Andrea Morales (@andreamorgar) and Miguel López (@wizmik12)
### Motivation
This dataset was built for the PyConES2020 conference with the purpose of using it for a poem generation task. More information: https://github.com/andreamorgar/poesIA
### Content
Data was acquired in July 2020 from the poetry website www.poemas-del-alma.com, which provides a large amount of Spanish-language poetry. Data was scraped using the Python library BeautifulSoup. For each poem on www.poemas-del-alma.com, we collected the poet's name, the poem, and the poem title. The scraping script is available at https://github.com/andreamorgar/poesIA/blob/master/poetry-scrapper.py.
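For illustration, here is a minimal scraping sketch in the spirit of the linked script; the URL pattern and CSS selectors below are hypothetical placeholders, since the real page structure is only documented in the repository:
```python
import requests
from bs4 import BeautifulSoup

# Hypothetical poem page URL; the real scraper in the linked repo
# walks the site's poet index pages to discover poem URLs.
url = "https://www.poemas-del-alma.com/<poet>/<poem>.htm"

html = requests.get(url, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Placeholder selectors: the actual tags/classes must be checked against the site.
record = {
    "author": soup.select_one("h2.poet-name").get_text(strip=True),
    "title": soup.select_one("h1").get_text(strip=True),
    "content": soup.select_one("div.poem-entry").get_text("\n", strip=True),
}
print(record)
```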
### Languages
Spanish
### Acknowledgements
We wouldn't be here without www.poemas-del-alma.com, which provides the poetry collection in this dataset. | [
-0.12287712097167969,
-0.03566873073577881,
0.27260738611221313,
0.6954303979873657,
-0.34345588088035583,
-0.01183834858238697,
-0.2491825968027115,
-0.6491090655326843,
0.5936458110809326,
0.48928922414779663,
-0.6748363375663757,
-0.7433493733406067,
-0.5709875226020813,
0.1833935678005... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blo05/cleaned_wiki_en_0-20 | blo05 | 2022-03-30T14:15:55Z | 20 | 0 | null | [
"region:us"
] | 2022-03-30T14:15:55Z | 2022-03-30T13:25:08.000Z | 2022-03-30T13:25:08 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Splend1dchan/phone-squad | Splend1dchan | 2022-03-30T13:40:31Z | 20 | 0 | null | [
"region:us"
] | 2022-03-30T13:40:31Z | 2022-03-30T13:33:20.000Z | 2022-03-30T13:33:20 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blo05/cleaned_wiki_en_20-40 | blo05 | 2022-03-30T14:53:59Z | 20 | 0 | null | [
"region:us"
] | 2022-03-30T14:53:59Z | 2022-03-30T14:40:41.000Z | 2022-03-30T14:40:41 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
metashift | null | 2023-01-25T15:03:59Z | 20 | 3 | metashift | [
"task_categories:image-classification",
"task_categories:other",
"task_ids:multi-label-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-b... | 2023-01-25T15:03:59Z | 2022-04-01T15:16:57.000Z | 2022-04-01T15:16:57 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
- other
task_ids:
- multi-label-image-classification
paperswithcode_id: metashift
pretty_name: MetaShift
tags:
- domain-generalization
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cat
'1': dog
'2': bus
'3': truck
'4': elephant
'5': horse
'6': bowl
'7': cup
- name: context
dtype: string
config_name: metashift
splits:
- name: train
num_bytes: 16333509
num_examples: 86808
download_size: 21878013674
dataset_size: 16333509
---
# Dataset Card for MetaShift
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
- **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
- **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
- **Point of Contact:** [Weixin Liang](mailto:wxliang@stanford.edu)
### Dataset Summary
The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
The key idea is to cluster images using their metadata, which provides context for each image.
For example: cats with cars, or cats in a bathroom.
The main advantage is that the dataset contains many more coherent sets of data compared to other benchmarks.
Two important benefits of MetaShift:
- Contains orders of magnitude more natural data shifts than previously available.
- Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
### Dataset Usage
The dataset has the following configuration parameters:
- selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
- attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer to [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) for more information.
- attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology in the above link.
- with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
- image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset. If the number of images is less than this threshold, the subset is ignored.
- min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
Consider the following examples to get an idea of how you can use the configuration parameters:
1. To generate the MetaShift Dataset:
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
```
The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes):
```python
load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
```
The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
3. To generate the dataset with additional image metadata information:
```python
load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
```
4. Further, you can specify your own configuration different from those used in the papers as follows:
```python
load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
```
### Dataset Meta-Graphs
From the MetaShift GitHub repo:
> MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
The following are the meta-graphs for the default classes; these have been generated using the `generate_full_MetaShift.py` file.
<p align='center'>
<img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
<b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
<b>Figure: Meta-graph for the “Bus” class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Elephant" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
<b>Figure: Meta-graph for the "Horse" class. </b>
</p>
<p align='center'>
<img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
<b>Figure: Meta-graph for the Truck class. </b>
</p>
### Supported Tasks and Leaderboards
From the paper:
> MetaShift supports evaluation on both :
> - domain generalization and subpopulation shifts settings,
> - assessing training conflicts.
### Languages
All the classes and subsets use English as their primary language.
## Dataset Structure
### Data Instances
A sample from the MetaShift dataset is provided below:
```
{
'image_id': '2411520',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
'label': 2,
'context': 'fence'
}
```
A sample from the MetaShift-Attributes dataset is provided below:
```
{
'image_id': '2401643',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
'label': 0
}
```
The format of the dataset with image metadata included by passing `with_image_metadata=True` to `load_dataset` is provided below:
```
{
'image_id': '2365745',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
'label': 0,
'context': 'ground',
'width': 500,
'height': 333,
'location': None,
'weather': None,
'objects':
{
'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
}
}
```
### Data Fields
- `image_id`: Unique numeric ID of the image in the base Visual Genome dataset.
- `image`: A PIL.Image.Image object containing the image.
- `label`: an int classification label.
- `context`: represents the context in which the label is seen. A given label could have multiple contexts.
The image metadata format can be seen [here](https://cs.stanford.edu/people/dorarad/gqa/download.html); a sample is provided above for reference.
### Data Splits
All the data is contained in the training set.
## Dataset Creation
### Curation Rationale
From the paper:
> We present MetaShift as an important resource for studying the behavior of
ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say "cat", we pull out all cat images and proceed with generating candidate subsets, constructing meta-graphs and then quantifying distances of distribution shifts.
#### Who are the source language producers?
[More Information Needed]
### Annotations
The MetaShift dataset uses Visual Genome as its base; therefore, the annotation process is the same as for the Visual Genome dataset.
#### Annotation process
From the Visual Genome paper:
> We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800,000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
#### Who are the annotators?
From the Visual Genome paper:
> Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
base dataset of our MetaShift. Potential concerns include minority groups being under-represented
in certain classes (e.g., women with snowboard), or annotation bias where people in images are
by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
quantifying, and mitigating biases in general computer vision datasets can help with addressing this
potential negative societal impact.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
From the paper:
> Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108,077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
### Citation Information
```bibtex
@InProceedings{liang2022metashift,
title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
author={Weixin Liang and James Zou},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=MTex8qKavoS}
}
```
### Contributions
Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset. | [
-0.7505197525024414,
-0.6182889342308044,
0.24224591255187988,
-0.21460475027561188,
-0.4705619215965271,
-0.10052565485239029,
-0.055221471935510635,
-0.4949341416358948,
0.4371769428253174,
0.4139007031917572,
-0.7993382811546326,
-0.8391573429107666,
-0.3024338483810425,
0.0332956463098... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
hackathon-pln-es/readability-es-hackathon-pln-public | hackathon-pln-es | 2023-04-13T08:51:15Z | 20 | 1 | null | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"readability",
"region:us"
] | 2023-04-13T08:51:15Z | 2022-04-04T10:26:51.000Z | 2022-04-04T10:26:51 | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: readability-es-sentences
tags:
- readability
---
# Dataset Card for [readability-es-sentences]
## Dataset Description
Compilation of short Spanish articles for readability assessment.
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- **Coh-Metrix-Esp corpus (Quispesaravia et al., 2016):** a collection of 100 parallel texts with simple and complex variants in Spanish. These texts include children's and adult stories to cover each category.
- **[kwiziq](https://www.kwiziq.com/):** a language-learner assistant.
- **[hablacultura.com](https://hablacultura.com/):** Spanish resources for students and teachers. We downloaded the content available on their website.
### Languages
Spanish
## Dataset Structure
The dataset includes 1,019 text entries between 80 and 8,714 characters long. The vast majority (97%) are below 4,000 characters.
### Data Fields
The dataset is formatted as JSON lines and includes the following fields:
- **category:** when available, the level of the text according to the Common European Framework of Reference for Languages (CEFR).
- **level:** standardized readability level: complex or simple.
- **level-3:** standardized readability level: basic, intermediate, or advanced.
- **text:** original text formatted into sentences.
Not all the entries contain usable values for `category`, `level`, and `level-3`, but all of them should contain at least one of `level` or `level-3`. When the corresponding information could not be derived, we use the special `"N/A"` value to indicate so.
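A hedged loading sketch (the repo id is this dataset's Hub path; the presence of a `train` split and the lowercase field names are assumptions based on the description above):
```python
from datasets import load_dataset

# Repo id from this dataset's Hub path; split name assumed.
ds = load_dataset("hackathon-pln-es/readability-es-hackathon-pln-public")

# Keep only entries with a usable two-way readability label.
with_level = ds["train"].filter(lambda example: example["level"] != "N/A")
print(with_level[0]["level"], with_level[0]["text"][:80])
```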
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
| [
-0.2625982463359833,
-0.3735290467739105,
0.26835376024246216,
0.3411879539489746,
-0.21666587889194489,
0.3603128492832184,
-0.212254598736763,
-0.6638604998588562,
0.25512540340423584,
0.45786455273628235,
-0.7145598530769348,
-0.8461010456085205,
-0.5055938959121704,
0.47286227345466614... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mwong/climatetext-claim-evidence-pair-related-evaluation | mwong | 2022-10-25T10:08:55Z | 20 | 1 | null | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|climate_text",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"... | 2022-10-25T10:08:55Z | 2022-04-21T10:26:24.000Z | 2022-04-21T10:26:24 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|climate_text
task_categories:
- text-classification
task_ids:
- fact-checking
---
### Dataset Summary
This dataset is extracted from the Climate Text dataset (https://www.sustainablefinance.uzh.ch/en/research/climate-fever/climatext.html), pre-processed and ready to evaluate.
The evaluation objective is a text classification task: given a claim and climate-related evidence, predict whether the pair is related. | [
-0.14223499596118927,
-0.48623234033584595,
0.3433614671230316,
0.1430058479309082,
-0.26697078347206116,
-0.12376929819583893,
-0.1593625247478485,
-0.3718506395816803,
0.06565658748149872,
0.9034709930419922,
-0.544058620929718,
-0.6032372713088989,
-0.7192929983139038,
0.110209658741950... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_km-KH | SetFit | 2022-05-06T09:09:34Z | 20 | 0 | null | [
"region:us"
] | 2022-05-06T09:09:34Z | 2022-05-06T09:09:31.000Z | 2022-05-06T09:09:31 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_mn-MN | SetFit | 2022-05-06T09:10:03Z | 20 | 0 | null | [
"region:us"
] | 2022-05-06T09:10:03Z | 2022-05-06T09:09:59.000Z | 2022-05-06T09:09:59 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_ms-MY | SetFit | 2022-05-06T09:10:09Z | 20 | 0 | null | [
"region:us"
] | 2022-05-06T09:10:09Z | 2022-05-06T09:10:06.000Z | 2022-05-06T09:10:06 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SetFit/amazon_massive_intent_zh-TW | SetFit | 2022-05-06T09:12:16Z | 20 | 0 | null | [
"region:us"
] | 2022-05-06T09:12:16Z | 2022-05-06T09:12:13.000Z | 2022-05-06T09:12:13 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
swcrazyfan/net-kjv | swcrazyfan | 2022-05-06T10:05:48Z | 20 | 1 | null | [
"region:us"
] | 2022-05-06T10:05:48Z | 2022-05-06T09:43:22.000Z | 2022-05-06T09:43:22 | ---
languages:
- en
task_categories:
- translation
licenses:
- unknown
---
# Dataset Card for [Needs More Information]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is a dataset made up of two Bible translations: NET and KJV.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
The original intention is to use the dataset to "translate" between modern and 17th-century English. By doing so, we can potentially read and understand texts from that period more clearly.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
Before the 18th and 19th centuries, English spelling was inconsistent. Because of this, the model often does not recognize spellings different from those in the KJV.
The model was trained on a relatively small amount of data, so it will not be as accurate as a model trained on a larger data set.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
[Needs More Information] | [
-0.33594590425491333,
-0.5509604811668396,
0.15509776771068573,
0.2941642105579376,
-0.2428330034017563,
0.09632118046283722,
-0.48492932319641113,
-0.45146673917770386,
0.3652789890766144,
0.7235003113746643,
-0.8376736640930176,
-1.0726529359817505,
-0.8321373462677002,
0.373242497444152... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
crabz/boolq_sk | crabz | 2022-05-06T09:46:35Z | 20 | 0 | null | [
"region:us"
] | 2022-05-06T09:46:35Z | 2022-05-06T09:45:15.000Z | 2022-05-06T09:45:15 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bananabot/TrumpSpeeches | bananabot | 2022-05-12T03:41:02Z | 20 | 2 | null | [
"license:wtfpl",
"region:us"
] | 2022-05-12T03:41:02Z | 2022-05-12T03:37:03.000Z | 2022-05-12T03:37:03 | ---
license: wtfpl
---
| [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
mdroth/transformers_issues_labels | mdroth | 2023-07-26T15:38:13Z | 20 | 0 | null | [
"region:us"
] | 2023-07-26T15:38:13Z | 2022-05-17T00:30:58.000Z | 2022-05-17T00:30:58 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: num_labels
sequence: int64
- name: arr_labels
sequence: int64
- name: labels
sequence: string
splits:
- name: train
num_bytes: 326243.372
num_examples: 122
- name: valid
num_bytes: 82897.906
num_examples: 31
- name: test
num_bytes: 104290.914
num_examples: 39
- name: dev
num_bytes: 2674.126
num_examples: 1
download_size: 296139
dataset_size: 516106.31799999997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
---
# Dataset Card for "transformers_issues_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7103233337402344,
-0.1099129319190979,
0.1320808380842209,
0.3521649241447449,
0.14583340287208557,
0.18096616864204407,
0.26366668939590454,
-0.14804820716381073,
0.7513546943664551,
0.48832061886787415,
-0.9535550475120544,
-0.6538395285606384,
-0.7722935676574707,
-0.2080224603414535... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
erickdp/ndat | erickdp | 2022-05-19T23:05:43Z | 20 | 0 | null | [
"region:us"
] | 2022-05-19T23:05:43Z | 2022-05-19T21:25:51.000Z | 2022-05-19T21:25:51 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aaraki/github-issues7 | aaraki | 2022-05-23T07:49:37Z | 20 | 0 | null | [
"region:us"
] | 2022-05-23T07:49:37Z | 2022-05-20T05:08:55.000Z | 2022-05-20T05:08:55 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
reallycarlaost/emobank-w-valence | reallycarlaost | 2022-05-20T15:09:40Z | 20 | 0 | null | [
"region:us"
] | 2022-05-20T15:09:40Z | 2022-05-20T10:43:48.000Z | 2022-05-20T10:43:48 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PoolC/2-fold-clone-detection-600k-5fold | PoolC | 2022-06-01T07:01:52Z | 20 | 0 | null | [
"region:us"
] | 2022-06-01T07:01:52Z | 2022-06-01T06:49:34.000Z | 2022-06-01T06:49:34 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
BeardedJohn/FakeNews | BeardedJohn | 2022-06-18T12:21:10Z | 20 | 1 | null | [
"region:us"
] | 2022-06-18T12:21:10Z | 2022-06-17T14:19:43.000Z | 2022-06-17T14:19:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
c17hawke/stackoverflow-dataset | c17hawke | 2022-06-18T21:27:37Z | 20 | 4 | null | [
"region:us"
] | 2022-06-18T21:27:37Z | 2022-06-18T21:27:23.000Z | 2022-06-18T21:27:23 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FacePerceiver/laion-face | FacePerceiver | 2022-11-18T04:04:56Z | 20 | 15 | null | [
"region:us"
] | 2022-11-18T04:04:56Z | 2022-06-21T13:28:35.000Z | 2022-06-21T13:28:35 | # Laion-Face
[LAION-Face](https://github.com/FacePerceiver/LAION-Face) is the human-face subset of [LAION-400M](https://laion.ai/laion-400-open-dataset/); it consists of 50 million image-text pairs. Face detection was conducted to find images with faces. Apart from the 50-million full set (LAION-Face 50M), there is a 20-million subset (LAION-Face 20M) for fast evaluation.
LAION-Face was first used as the training set of [FaRL](https://github.com/FacePerceiver/FaRL), which provides powerful pre-trained transformer backbones for face analysis tasks.
For more details, please check the official repo at https://github.com/FacePerceiver/LAION-Face .
## Download and convert metadata
```bash
wget -l1 -r --no-parent https://the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-meta/
mv the-eye.eu/public/AI/cah/laion400m-met-release/laion400m-meta/ .
wget https://huggingface.co/datasets/FacePerceiver/laion-face/resolve/main/laion_face_ids.pth
wget https://raw.githubusercontent.com/FacePerceiver/LAION-Face/master/convert_parquet.py
python convert_parquet.py ./laion_face_ids.pth ./laion400m-meta ./laion_face_meta
```
## Download the images with img2dataset
Once the metadata is ready, you can start downloading the images.
```bash
wget https://raw.githubusercontent.com/FacePerceiver/LAION-Face/master/download.sh
bash download.sh ./laion_face_meta ./laion_face_data
```
Please be patient: this command may run for days and consume about 2 TB of disk space, downloading the 50 million image-text pairs as 32 parts.
- To use the **LAION-Face 50M**, you should use all 32 parts.
- To use the **LAION-Face 20M**, you should use the following parts:
```
0,2,5,8,13,15,17,18,21,22,24,25,28
```
Check out `download.sh` and [img2dataset](https://github.com/rom1504/img2dataset) for more details and parameter settings.
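Once a part has been downloaded, here is a hedged reading sketch using the `webdataset` package, assuming `download.sh` keeps img2dataset's webdataset output format (`.tar` shards of `jpg`/`txt` pairs); the shard name is a placeholder:
```python
import webdataset as wds

# Placeholder shard name; point this at the .tar files img2dataset produced.
dataset = (
    wds.WebDataset("laion_face_data/00000.tar")
    .decode("pil")              # decode images to PIL
    .to_tuple("jpg", "txt")     # (image, caption) pairs
)

for image, caption in dataset:
    print(image.size, caption[:60])
    break
```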
| [
-0.8194286227226257,
-0.3176352083683014,
0.38935551047325134,
0.28840428590774536,
-0.34250137209892273,
-0.18664973974227905,
0.12566979229450226,
-0.4497039318084717,
0.24006620049476624,
0.7719810605049133,
-0.74078369140625,
-0.4601705074310303,
-0.430633544921875,
0.09625116735696793... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
rjac/kaggle-entity-annotated-corpus-ner-dataset | rjac | 2022-10-25T10:37:24Z | 20 | 1 | null | [
"annotations_creators:Abhinav Walia (Owner)",
"language:en",
"license:odbl",
"region:us"
] | 2022-10-25T10:37:24Z | 2022-06-23T20:31:55.000Z | 2022-06-23T20:31:55 | ---
annotations_creators:
- Abhinav Walia (Owner)
language:
- en
license:
- odbl
---
**Date**: 2022-07-10<br/>
**Files**: ner_dataset.csv<br/>
**Source**: [Kaggle entity annotated corpus](https://www.kaggle.com/datasets/abhinavwalia95/entity-annotated-corpus)<br/>
**Notes**: The dataset contains only the tokens and NER tag labels. Labels are uppercase.
# About Dataset
[**from Kaggle Datasets**](https://www.kaggle.com/datasets/abhinavwalia95/entity-annotated-corpus)
## Context
Annotated corpus for named entity recognition, built from the GMB (Groningen Meaning Bank) corpus for entity classification, with enhanced and popular natural-language-processing features applied to the dataset.
Tip: if using Python, load the dataset into a pandas DataFrame for convenience.
## Content
This is an extract from the GMB corpus, tagged and annotated specifically for training a classifier to predict named entities such as names, locations, etc.
Number of tagged entities:
'O': 1146068, 'geo-nam': 58388, 'org-nam': 48034, 'per-nam': 23790, 'gpe-nam': 20680, 'tim-dat': 12786, 'tim-dow': 11404, 'per-tit': 9800, 'per-fam': 8152, 'tim-yoc': 5290, 'tim-moy': 4262, 'per-giv': 2413, 'tim-clo': 891, 'art-nam': 866, 'eve-nam': 602, 'nat-nam': 300, 'tim-nam': 146, 'eve-ord': 107, 'per-ini': 60, 'org-leg': 60, 'per-ord': 38, 'tim-dom': 10, 'per-mid': 1, 'art-add': 1
## Essential info about entities
* geo = Geographical Entity
* org = Organization
* per = Person
* gpe = Geopolitical Entity
* tim = Time indicator
* art = Artifact
* eve = Event
* nat = Natural Phenomenon
* Total Words Count = 1354149
* Target Data Column: "tag" (ner_tag in this repo)
Inspiration: This dataset has attracted growing interest because of the features added in its recent version. It also helps build a broad view of feature engineering with respect to this kind of data.
## Modifications
The ner_dataset.csv file was modified to have a data structure similar to the [CoNLL-2003 dataset](https://huggingface.co/datasets/conll2003).
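A short, hedged loading sketch (the repo id is this dataset's Hub path; the exact column names should be checked against the loaded features):
```python
from datasets import load_dataset

# Repo id from this dataset's Hub path; structure is modeled on CoNLL-2003.
ds = load_dataset("rjac/kaggle-entity-annotated-corpus-ner-dataset")

# Inspect the features and the first record; per the notes above the data
# carries tokens and uppercase NER tag labels.
print(ds["train"].features)
print(ds["train"][0])
```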
## Licensing information
Database: Open Database, Contents: Database Contents.
| [
-0.5581005215644836,
-0.712078332901001,
0.0976778194308281,
0.04887329414486885,
-0.0051689762622118,
-0.054596785455942154,
-0.3736170530319214,
-0.6487246751785278,
0.5930345058441162,
0.5853169560432434,
-0.4424586296081543,
-0.8898374438285828,
-0.642483651638031,
0.24341139197349548,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
smangrul/MuDoConv | smangrul | 2022-06-29T06:39:30Z | 20 | 1 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-06-29T06:39:30Z | 2022-06-24T06:05:04.000Z | 2022-06-24T06:05:04 | ---
license: cc-by-nc-4.0
---
Datasets collated from 10 sources and preprocessed to have `["texts", "labels"]` columns for training/fine-tuning sequence-to-sequence models such as T5 or Blenderbot. Below are the 10 datasets (a hedged loading sketch follows the list):
1. blended_skill_talk
2. conv_ai_2
3. empathetic_dialogues
4. wizard_of_wikipedia
5. meta_woz
6. multi_woz
7. spolin
8. dailydialog
9. cornell_movie_dialogues
10. taskmaster
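A hedged loading sketch (the repo id is this dataset's Hub path; the split name and the `texts`/`labels` columns follow the description above but should be verified):
```python
from datasets import load_dataset

# Repo id from this dataset's Hub path; split name assumed.
mudoconv = load_dataset("smangrul/MuDoConv")
print(mudoconv)

# Each record is expected to carry a context ("texts") and a response ("labels").
example = mudoconv["train"][0]
print(example["texts"])
print(example["labels"])
```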
The data access and preprocessing code is [here](https://github.com/pacman100/accelerate-deepspeed-test/blob/main/src/data_preprocessing/DataPreprocessing.ipynb) | [
-0.6647269129753113,
-0.6134335398674011,
0.26689064502716064,
0.16625694930553436,
0.045646559447050095,
0.2949621081352234,
0.0321732722222805,
-0.19859056174755096,
-0.07539884001016617,
0.7646394968032837,
-0.9781273603439331,
-0.4764196276664734,
-0.5086913108825684,
0.412473499774932... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nexdata/Passenger_Behavior_Recognition_Data | Nexdata | 2023-08-31T02:42:29Z | 20 | 0 | null | [
"region:us"
] | 2023-08-31T02:42:29Z | 2022-06-27T08:15:49.000Z | 2022-06-27T08:15:49 | ---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---
# Dataset Card for Nexdata/Passenger_Behavior_Recognition_Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nexdata.ai/datasets/1083?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
122 People - Passenger Behavior Recognition Data. The data covers multiple age groups, time periods, and races (Caucasian, Black, Indian). The passenger behaviors include normal behavior and abnormal behavior (carsickness, sleepiness, and losing items). Binocular cameras with RGB and infrared channels were used as the capture devices. This data can be used for tasks such as passenger behavior analysis.
For more details, please refer to the link: https://www.nexdata.ai/datasets/1083?source=Huggingface
### Supported Tasks and Leaderboards
face-detection, computer-vision, object-detection: The dataset can be used to train a model for face detection.
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing
### Citation Information
[More Information Needed]
### Contributions | [
-0.7126109600067139,
-0.4701650142669678,
0.17573805153369904,
0.4823110103607178,
-0.12452531605958939,
-0.05321912467479706,
-0.09013762325048447,
-0.5895122289657593,
0.7090099453926086,
0.5474112629890442,
-0.843468427658081,
-0.6695540547370911,
-0.4568597674369812,
0.0889598578214645... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ZeyadAhmed/Arabic-SQuADv2.0 | ZeyadAhmed | 2022-06-29T16:04:58Z | 20 | 0 | null | [
"region:us"
] | 2022-06-29T16:04:58Z | 2022-06-29T15:14:11.000Z | 2022-06-29T15:14:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622263669967651,
0.43461522459983826,
-0.52829909324646,
0.7012971639633179,
0.7915719747543335,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104475975036621,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
PolyAI/evi | PolyAI | 2022-10-25T10:39:33Z | 20 | 2 | evi-multilingual-spoken-dialogue-tasks-and-1 | [
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"language:fr",
"language:pl",
"license:cc-by-4.0",
"arxiv:2204.13496",
"region:us"
] | 2022-10-25T10:39:33Z | 2022-06-30T11:42:45.000Z | 2022-06-30T11:42:45 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
- fr
- pl
license:
- cc-by-4.0
multilinguality:
- multilingual
paperswithcode_id: evi-multilingual-spoken-dialogue-tasks-and-1
language_bcp47:
- en
- en-GB
- fr
- fr-FR
- pl
---
# EVI
## Dataset Description
- **Paper:** [EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification](https://arxiv.org/abs/2204.13496)
- **Repository:** [Github](https://github.com/PolyAI-LDN/evi-paper)
EVI is a challenging spoken multilingual dataset
with 5,506 dialogues in English, Polish, and French
that can be used for benchmarking and developing
knowledge-based enrolment, verification, and identification for spoken dialogue systems.
## Example
EVI can be downloaded and used as follows:
```py
from datasets import load_dataset
evi = load_dataset("PolyAI/evi", "en-GB") # for British English
# to download data from all locales use:
# evi = load_dataset("PolyAI/evi", "all")
# see structure
print(evi)
```
## Dataset Structure
We show detailed information for an example from the `en-GB` configuration of the dataset.
All other configurations have the same structure.
### Data Instances
An example of a data instance of the config `en-GB` looks as follows:
```
{
"language": 0,
"dialogue_id": "CA0007220161df7be23f4554704c8720f5",
"speaker_id": "e80e9bdd33eda593f16a1b6f2fb228ff",
"turn_id": 0,
"target_profile_id": "en.GB.608",
"asr_transcription": "w20 a b",
"asr_nbest'": ["w20 a b", "w20 a bee", "w20 a baby"],
"path": "audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
"audio": {
"path": "/home/georgios/.cache/huggingface/datasets/downloads/extracted/0335ebc25feace53243133b49ba17ba18e26f0f97cb083ffdf4e73dd7427b443/audios/en/CA0007220161df7be23f4554704c8720f5/0.wav",
"array": array([ 0.00024414, 0.00024414, 0.00024414, ..., 0.00024414,
-0.00024414, 0.00024414], dtype=float32),
"sampling_rate": 8000,
}
}
```
### Data Fields
The data fields are the same among all splits.
- **language** (int): ID of language
- **dialogue_id** (str): the ID of the dialogue
- **speaker_id** (str): the ID of the speaker
- **turn_id** (int)": the ID of the turn
- **target_profile_id** (str): the ID of the target profile
- **asr_transcription** (str): ASR transcription of the audio file
- **asr_nbest** (list): n-best ASR transcriptions of the audio file
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including loaded audio array, sampling rate and path of audio
### Data Splits
Each config has only the `"test"` split, containing *ca.* 1,800 dialogues.
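A short follow-up sketch for accessing one instance (it reuses the `evi` object from the loading example above and assumes the `"test"` split and the fields documented in this card):
```py
sample = evi["test"][0]                  # `evi` as loaded in the example above
print(sample["asr_transcription"])       # 1-best ASR hypothesis
waveform = sample["audio"]["array"]      # float32 waveform
rate = sample["audio"]["sampling_rate"]  # 8 kHz telephony audio
```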
## Dataset Creation
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
```
@inproceedings{Spithourakis2022evi,
author = {Georgios P. Spithourakis and
Ivan Vuli\'{c} and
Micha\l{} Lis and
I\~{n}igo Casanueva
and Pawe\l{} Budzianowski},
title = {{EVI}: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based Enrolment, Verification, and Identification},
year = {2022},
note = {Data available at https://github.com/PolyAI-LDN/evi-paper},
url = {https://arxiv.org/abs/2204.13496},
booktitle = {Findings of NAACL (publication pending)}
}
```
### Contributions
Thanks to [@polinaeterna](https://github.com/polinaeterna) for helping with adding this dataset | [
-0.479764848947525,
-0.5028718709945679,
0.17887654900550842,
0.26166120171546936,
-0.06902128458023071,
-0.0699336975812912,
-0.425972044467926,
-0.24898552894592285,
0.505195140838623,
0.3544427752494812,
-0.8195830583572388,
-0.8650850653648376,
-0.4779616892337799,
0.10648737847805023,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ksramalakshmi/VertebraSegmentation | ksramalakshmi | 2022-07-06T08:24:09Z | 20 | 0 | null | [
"region:us"
] | 2022-07-06T08:24:09Z | 2022-07-06T08:23:55.000Z | 2022-07-06T08:23:55 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
userGagan/ResizedSample | userGagan | 2022-07-14T06:24:52Z | 20 | 0 | null | [
"region:us"
] | 2022-07-14T06:24:52Z | 2022-07-07T16:52:43.000Z | 2022-07-07T16:52:43 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
jonaskoenig/Questions-vs-Statements-Classification | jonaskoenig | 2022-07-11T15:36:35Z | 20 | 2 | null | [
"region:us"
] | 2022-07-11T15:36:35Z | 2022-07-10T20:24:09.000Z | 2022-07-10T20:24:09 | [Needs More Information]
# Dataset Card for Questions-vs-Statements-Classification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [Kaggle](https://www.kaggle.com/datasets/shahrukhkhan/questions-vs-statementsclassificationdataset)
- **Point of Contact:** [Shahrukh Khan](https://www.kaggle.com/shahrukhkhan)
### Dataset Summary
A dataset containing statements and questions with their corresponding labels.
### Supported Tasks and Leaderboards
multi-class-classification
### Languages
en
## Dataset Structure
### Data Splits
- Train
- Test
- Valid
## Dataset Creation
### Curation Rationale
The goal of this project is to classify sentences based on type:
- Statement (declarative sentence)
- Question (interrogative sentence)
### Source Data
[Kaggle](https://www.kaggle.com/datasets/shahrukhkhan/questions-vs-statementsclassificationdataset)
#### Initial Data Collection and Normalization
The dataset is created by parsing out the SQuAD dataset and combining it with the SPAADIA dataset.
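A minimal loading sketch (the Hub ID follows this card; the split and column names are assumptions, since the card does not list them explicitly):
```python
from datasets import load_dataset

# Split names assumed from the "Data Splits" section above (train / test / valid)
ds = load_dataset("jonaskoenig/Questions-vs-Statements-Classification")
print(ds)  # inspect the actual split and column names before relying on them
```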
### Other Known Limitations
Questions in this case are only one sentence long, while statements are a single sentence or more. They are classified correctly but do not include the sentences that come before a question.
## Additional Information
### Dataset Curators
[SHAHRUKH KHAN](https://www.kaggle.com/shahrukhkhan)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
| [
-0.4997873306274414,
-0.8704720139503479,
-0.006594248581677675,
0.19804853200912476,
0.012833213433623314,
0.3651944696903229,
-0.3088638484477997,
-0.19665659964084625,
0.1882275640964508,
0.5973588228225708,
-0.9064226746559143,
-0.8783596158027649,
-0.5530211925506592,
0.24150173366069... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/conll2003 | tner | 2022-07-18T00:43:28Z | 20 | 1 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2022-07-18T00:43:28Z | 2022-07-16T10:39:09.000Z | 2022-07-16T10:39:09 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: CoNLL-2003
---
# Dataset Card for "tner/conll2003"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Dataset:** CoNLL 2003
- **Domain:** News
- **Number of Entity:** 3
### Dataset Summary
CoNLL-2003 NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `ORG`, `PER`, `LOC`, `MISC`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
    'tokens': ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',', 'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.'],
    'tags': [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/conll2003/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-ORG": 1,
"B-MISC": 2,
"B-PER": 3,
"I-PER": 4,
"B-LOC": 5,
"I-ORG": 6,
"I-MISC": 7,
"I-LOC": 8
}
```
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
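A minimal loading sketch (the Hub ID follows this card, and the label map is the one shown above):
```python
from datasets import load_dataset

dataset = load_dataset("tner/conll2003")  # splits: train / validation / test
label2id = {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
            "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
id2label = {i: label for label, i in label2id.items()}

# Pair each token with its decoded label string
example = dataset["train"][0]
print(list(zip(example["tokens"], [id2label[t] for t in example["tags"]])))
```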
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
``` | [
-0.5437624454498291,
-0.4068536162376404,
0.17595364153385162,
0.1661398857831955,
-0.36318230628967285,
-0.11598557978868484,
-0.31045064330101013,
-0.6023128628730774,
0.4794403910636902,
0.3439996838569641,
-0.3780530095100403,
-0.5878181457519531,
-0.6716737151145935,
0.520568013191223... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tner/tweebank_ner | tner | 2022-11-27T20:59:13Z | 20 | 3 | null | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2201.07281",
"region:us"
] | 2022-11-27T20:59:13Z | 2022-07-18T10:39:20.000Z | 2022-07-18T10:39:20 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: TweeBank NER
---
# Dataset Card for "tner/tweebank_ner"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://arxiv.org/abs/2201.07281](https://arxiv.org/abs/2201.07281)
- **Dataset:** TweeBank NER
- **Domain:** Twitter
- **Number of Entity:** 4
### Dataset Summary
TweeBank NER dataset formatted in a part of [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `LOC`, `MISC`, `PER`, `ORG`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['RT', '@USER2362', ':', 'Farmall', 'Heart', 'Of', 'The', 'Holidays', 'Tabletop', 'Christmas', 'Tree', 'With', 'Lights', 'And', 'Motion', 'URL1087', '#Holiday', '#Gifts'],
'tags': [8, 8, 8, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweebank_ner/raw/main/dataset/label.json).
```python
{
"B-LOC": 0,
"B-MISC": 1,
"B-ORG": 2,
"B-PER": 3,
"I-LOC": 4,
"I-MISC": 5,
"I-ORG": 6,
"I-PER": 7,
"O": 8
}
```
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|tweebank_ner | 1639| 710 |1201|
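The same loading pattern as for the other TNER datasets applies here (the Hub ID follows this card; the field names come from the example above):
```python
from datasets import load_dataset

dataset = load_dataset("tner/tweebank_ner")  # splits: train / validation / test
example = dataset["train"][0]
print(example["tokens"][:5], example["tags"][:5])
```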
### Citation Information
```
@article{DBLP:journals/corr/abs-2201-07281,
author = {Hang Jiang and
Yining Hua and
Doug Beeferman and
Deb Roy},
title = {Annotating the Tweebank Corpus on Named Entity Recognition and Building
{NLP} Models for Social Media Analysis},
journal = {CoRR},
volume = {abs/2201.07281},
year = {2022},
url = {https://arxiv.org/abs/2201.07281},
eprinttype = {arXiv},
eprint = {2201.07281},
timestamp = {Fri, 21 Jan 2022 13:57:15 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-07281.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | [
-0.4180237948894501,
-0.45708224177360535,
0.04429171234369278,
0.29328715801239014,
-0.36393260955810547,
0.21119150519371033,
-0.2776031494140625,
-0.3176443576812744,
0.6878536343574524,
0.24356606602668762,
-0.3838551342487335,
-0.9143521189689636,
-0.7683714032173157,
0.20703563094139... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ttxy/tweet_disaster | ttxy | 2022-07-24T06:02:30Z | 20 | 0 | null | [
"region:us"
] | 2022-07-24T06:02:30Z | 2022-07-24T06:02:10.000Z | 2022-07-24T06:02:10 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
nsarker/plantspecies-demo | nsarker | 2022-08-12T15:07:38Z | 20 | 1 | null | [
"region:us"
] | 2022-08-12T15:07:38Z | 2022-08-11T23:35:37.000Z | 2022-08-11T23:35:37 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sepidmnorozy/Chinese_sentiment | sepidmnorozy | 2022-08-15T23:09:45Z | 20 | 3 | null | [
"region:us"
] | 2022-08-15T23:09:45Z | 2022-08-15T23:08:48.000Z | 2022-08-15T23:08:48 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
sepidmnorozy/Spanish_sentiment | sepidmnorozy | 2022-08-16T10:00:12Z | 20 | 0 | null | [
"region:us"
] | 2022-08-16T10:00:12Z | 2022-08-16T09:59:18.000Z | 2022-08-16T09:59:18 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Abelespin/vini_dataset | Abelespin | 2022-09-28T15:51:13Z | 20 | 0 | null | [
"region:us"
] | 2022-09-28T15:51:13Z | 2022-09-28T15:47:40.000Z | 2022-09-28T15:47:40 | Entry not found | [
-0.32276472449302673,
-0.22568407654762268,
0.8622258901596069,
0.4346148371696472,
-0.5282984972000122,
0.7012965679168701,
0.7915717363357544,
0.07618629932403564,
0.7746022939682007,
0.2563222646713257,
-0.785281777381897,
-0.22573848068714142,
-0.9104482531547546,
0.5715669393539429,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
CANUTO/images | CANUTO | 2022-09-28T16:00:43Z | 20 | 0 | null | [
"region:us"
] | 2022-09-28T16:00:43Z | 2022-09-28T15:54:45.000Z | 2022-09-28T15:54:45 | a | [
-0.12115570902824402,
-0.683180034160614,
0.5243465304374695,
0.31528282165527344,
-0.24262170493602753,
0.33621278405189514,
0.4461815357208252,
-0.019997993484139442,
0.7404854893684387,
0.5023880004882812,
-0.8753649592399597,
-0.021468261256814003,
-1.0433977842330933,
0.25828588008880... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test-mathemakitt-c50da3-1597456334 | autoevaluate | 2022-09-29T15:59:04Z | 20 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-29T15:59:04Z | 2022-09-29T15:31:36.000Z | 2022-09-29T15:31:36 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test
dataset_config: mathemakitten--winobias_antistereotype_test
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_test
* Config: mathemakitten--winobias_antistereotype_test
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | [
-0.4192511737346649,
-0.3883885145187378,
0.22582240402698517,
0.03031722456216812,
-0.055588722229003906,
-0.07068253308534622,
0.0841626524925232,
-0.4892548620700836,
0.2948180139064789,
0.3234342038631439,
-1.0298339128494263,
-0.25130677223205566,
-0.6060537695884705,
-0.0956225916743... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
trevfran/perfil | trevfran | 2022-09-30T02:07:57Z | 20 | 0 | null | [
"license:other",
"region:us"
] | 2022-09-30T02:07:57Z | 2022-09-30T01:50:24.000Z | 2022-09-30T01:50:24 | ---
license: other
---
| [
-0.12853367626667023,
-0.18616794049739838,
0.6529126763343811,
0.4943627417087555,
-0.19319313764572144,
0.23607443273067474,
0.36071979999542236,
0.05056338757276535,
0.5793654322624207,
0.7400138974189758,
-0.6508103013038635,
-0.23783987760543823,
-0.710224986076355,
-0.047825977206230... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
quinsclr/answerable_tydiqa_tokenized_english | quinsclr | 2022-09-30T10:54:37Z | 20 | 0 | null | [
"region:us"
] | 2022-09-30T10:54:37Z | 2022-09-30T10:54:19.000Z | 2022-09-30T10:54:19 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
basilis/nerDatasetv1 | basilis | 2022-10-05T17:05:36Z | 20 | 0 | null | [
"region:us"
] | 2022-10-05T17:05:36Z | 2022-10-05T16:56:39.000Z | 2022-10-05T16:56:39 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
dojo/newsgroups | dojo | 2022-10-11T04:19:26Z | 20 | 0 | null | [
"region:us"
] | 2022-10-11T04:19:26Z | 2022-10-11T03:54:54.000Z | 2022-10-11T03:54:54 | Entry not found | [
-0.3227649927139282,
-0.225684255361557,
0.862226128578186,
0.43461498618125916,
-0.5282987952232361,
0.7012963891029358,
0.7915717363357544,
0.07618629932403564,
0.7746025919914246,
0.2563219666481018,
-0.7852816581726074,
-0.2257382869720459,
-0.9104480743408203,
0.5715669393539429,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
krm/for-ULPGL-Dissertation | krm | 2022-10-16T07:53:00Z | 20 | 0 | null | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|orange_sum",
"language:fr",
"license:other",
"krm",
"ulpgl",
"orange",
"reg... | 2022-10-16T07:53:00Z | 2022-10-13T11:01:24.000Z | 2022-10-13T11:01:24 | ---
annotations_creators:
- other
language:
- fr
language_creators:
- other
license:
- other
multilinguality:
- monolingual
pretty_name: for-ULPGL-Dissertation
size_categories:
- 10K<n<100K
source_datasets:
- extended|orange_sum
tags:
- krm
- ulpgl
- orange
task_categories:
- summarization
task_ids:
- news-articles-summarization
---
# Dataset Card for [for-ULPGL-Dissertation]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** krm/for-ULPGL-Dissertation
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is essentially based on the *GEM/Orange_sum* dataset, which is dedicated to summarizing articles in French. It consists of the abstract data of that dataset (Orange_sum), to which a number of summaries generated by the **Mon Résumeur** system of **David Krame** have been added.
### Supported Tasks and Leaderboards
Automatic summarization
### Languages
French
## Dataset Structure
### Data Fields
*summary* and *text* are the fields of the dataset, where:
**text** contains the source texts and
**summary** the corresponding summaries.
### Data Splits
As of October 16, 2022, the dataset consists of:
> **21721** training examples (split named **train**)
> **1545** validation examples (split named **validation**)
> **1581** test examples (split named **test**)
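A minimal loading sketch (the Hub ID follows this card; the field names come from the "Data Fields" section above):
```python
from datasets import load_dataset

ds = load_dataset("krm/for-ULPGL-Dissertation")
ex = ds["train"][0]
print(ex["text"][:200])  # source article
print(ex["summary"])     # corresponding summary
```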
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
| [
-0.40610459446907043,
-0.36603519320487976,
0.2745222747325897,
0.27918633818626404,
-0.07517112046480179,
0.035203494131565094,
-0.26253741979599,
-0.10702264308929443,
0.2651335299015045,
0.48226839303970337,
-0.6087443232536316,
-1.0992594957351685,
-0.46478787064552307,
0.2191160321235... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
zhenzi/test | zhenzi | 2022-10-18T02:03:54Z | 20 | 0 | null | [
"region:us"
] | 2022-10-18T02:03:54Z | 2022-10-14T01:38:17.000Z | 2022-10-14T01:38:17 |
## test | [
-0.3340488374233246,
-0.5287365913391113,
0.2672910690307617,
0.5127611756324768,
-0.5576208233833313,
0.16945025324821472,
0.39883556962013245,
0.31236302852630615,
0.08509017527103424,
0.7430499196052551,
-0.29360494017601013,
-0.24781589210033417,
-0.567743182182312,
0.31540441513061523... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
aneesh-b/SQuAD_Hindi | aneesh-b | 2022-10-16T06:18:33Z | 20 | 1 | null | [
"license:unknown",
"region:us"
] | 2022-10-16T06:18:33Z | 2022-10-14T19:20:33.000Z | 2022-10-14T19:20:33 | ---
license: unknown
---
This dataset was created by translating a part of the Stanford Question Answering Dataset (SQuAD).
It contains 5k QA pairs from the original SQuAD dataset translated to Hindi using the googletrans API. | [
-0.11016609519720078,
-0.25958573818206787,
0.12139934301376343,
0.33697837591171265,
-0.1871100217103958,
0.5792409181594849,
0.3484727442264557,
-0.44149085879325867,
0.41588255763053894,
0.3803553283214569,
-1.1635180711746216,
-0.41528910398483276,
-0.1830742359161377,
0.44038799405097... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/Commonsense_Validation | arbml | 2022-10-14T21:52:21Z | 20 | 1 | null | [
"region:us"
] | 2022-10-14T21:52:21Z | 2022-10-14T21:52:13.000Z | 2022-10-14T21:52:13 | ---
dataset_info:
features:
- name: id
dtype: string
- name: first_sentence
dtype: string
- name: second_sentence
dtype: string
- name: label
dtype:
class_label:
names:
0: 0
1: 1
splits:
- name: train
num_bytes: 1420233
num_examples: 10000
- name: validation
num_bytes: 133986
num_examples: 1000
download_size: 837486
dataset_size: 1554219
---
# Dataset Card for "Commonsense_Validation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5167104005813599,
-0.281087189912796,
0.24861359596252441,
0.09127888828516006,
-0.1931314468383789,
-0.26356518268585205,
0.21805614233016968,
-0.039397045969963074,
0.4266177713871002,
0.4852343201637268,
-0.6992639303207397,
-0.8124571442604065,
-0.45100027322769165,
-0.0178620703518... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Harsit/xnli2.0_train_swahili | Harsit | 2022-10-15T09:22:30Z | 20 | 0 | null | [
"region:us"
] | 2022-10-15T09:22:30Z | 2022-10-15T09:21:59.000Z | 2022-10-15T09:21:59 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
andrewkroening/538-NBA-Historical-Raptor | andrewkroening | 2022-11-06T22:14:56Z | 20 | 0 | null | [
"license:cc",
"region:us"
] | 2022-11-06T22:14:56Z | 2022-10-19T16:47:53.000Z | 2022-10-19T16:47:53 | ---
license: cc
---
## Dataset Overview
### Intro
This dataset was downloaded from the good folks at fivethirtyeight. You can find the original (or in the future, updated) versions of this and several similar datasets at [this GitHub link.](https://github.com/fivethirtyeight/data/tree/master/nba-raptor)
### Data layout
Here are the columns in this dataset, which contains data on every NBA player, broken out by season, since the 1976 NBA-ABA merger:
Column | Description
-------|---------------
`player_name` | Player name
`player_id` | Basketball-Reference.com player ID
`season` | Season
`season_type` | Regular season (RS) or playoff (PO)
`team` | Basketball-Reference ID of team
`poss` | Possessions played
`mp` | Minutes played
`raptor_box_offense` | Points above average per 100 possessions added by player on offense, based only on box score estimate
`raptor_box_defense` | Points above average per 100 possessions added by player on defense, based only on box score estimate
`raptor_box_total` | Points above average per 100 possessions added by player, based only on box score estimate
`raptor_onoff_offense` | Points above average per 100 possessions added by player on offense, based only on plus-minus data
`raptor_onoff_defense` | Points above average per 100 possessions added by player on defense, based only on plus-minus data
`raptor_onoff_total` | Points above average per 100 possessions added by player, based only on plus-minus data
`raptor_offense` | Points above average per 100 possessions added by player on offense, using both box and on-off components
`raptor_defense` | Points above average per 100 possessions added by player on defense, using both box and on-off components
`raptor_total` | Points above average per 100 possessions added by player on both offense and defense, using both box and on-off components
`war_total` | Wins Above Replacement between regular season and playoffs
`war_reg_season` | Wins Above Replacement for regular season
`war_playoffs` | Wins Above Replacement for playoffs
`predator_offense` | Predictive points above average per 100 possessions added by player on offense
`predator_defense` | Predictive points above average per 100 possessions added by player on defense
`predator_total` | Predictive points above average per 100 possessions added by player on both offense and defense
`pace_impact` | Player impact on team possessions per 48 minutes
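A minimal analysis sketch with pandas (the column names come from the table above; the CSV file name is a hypothetical guess, so check the fivethirtyeight repo linked above for the exact names):
```python
import pandas as pd

# Hypothetical file name -- the nba-raptor folder ships several per-player CSVs
url = ("https://raw.githubusercontent.com/fivethirtyeight/data/"
       "master/nba-raptor/historical_RAPTOR_by_player.csv")
df = pd.read_csv(url)

# Top 10 regular-season WAR seasons since the 1976 merger
cols = ["player_name", "season", "war_reg_season"]
print(df.nlargest(10, "war_reg_season")[cols])
```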
### More information
This dataset was put together for Hugging Face by this guy: [Andrew Kroening](https://github.com/andrewkroening)
He was building some kind of a silly tool using this dataset. It's an NBA WAR Predictor tool, and you can find the Gradio interface [here.](https://huggingface.co/spaces/andrewkroening/nba-war-predictor) The GitHub repo can be found [here.](https://github.com/andrewkroening/nba-war-predictor-tool) | [
-0.7492937445640564,
-0.41296422481536865,
0.20103274285793304,
0.2526555061340332,
0.05056954547762871,
0.571533739566803,
0.40852460265159607,
-0.5765367746353149,
0.5282831192016602,
0.11983852088451385,
-0.6707403659820557,
-0.6704725623130798,
-0.8864890336990356,
0.14012762904167175,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
tanay/nli-corpus | tanay | 2022-10-25T08:11:01Z | 20 | 0 | null | [
"region:us"
] | 2022-10-25T08:11:01Z | 2022-10-25T07:44:29.000Z | 2022-10-25T07:44:29 | Entry not found | [
-0.32276487350463867,
-0.22568444907665253,
0.8622263073921204,
0.43461570143699646,
-0.5282988548278809,
0.7012969255447388,
0.7915717363357544,
0.07618642598390579,
0.7746027112007141,
0.25632190704345703,
-0.7852815389633179,
-0.22573848068714142,
-0.910447895526886,
0.5715675354003906,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
pseeej/animal-crossing-data | pseeej | 2022-11-02T03:31:55Z | 20 | 2 | null | [
"region:us"
] | 2022-11-02T03:31:55Z | 2022-11-02T03:30:51.000Z | 2022-11-02T03:30:51 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 7209776.0
num_examples: 389
download_size: 7181848
dataset_size: 7209776.0
---
# Dataset Card for "animal-crossing-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.8795072436332703,
-0.16618584096431732,
0.07439253479242325,
0.5374366044998169,
-0.08805007487535477,
0.17046456038951874,
0.5925691723823547,
-0.7380562424659729,
0.8278598189353943,
0.45953837037086487,
-0.9226988554000854,
-0.6783260107040405,
-0.6860972046852112,
-0.058113381266593... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
frankier/processed_multiscale_rt_critics | frankier | 2023-10-03T17:16:04Z | 20 | 0 | null | [
"region:us"
] | 2023-10-03T17:16:04Z | 2022-11-02T12:15:25.000Z | 2022-11-02T12:15:25 | ---
dataset_info:
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: review_content
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: includes_zero
dtype: bool
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: group_id
dtype: uint32
splits:
- name: train
num_bytes: 117244343
num_examples: 540256
- name: test
num_bytes: 28517095
num_examples: 131563
download_size: 0
dataset_size: 145761438
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "processed_multiscale_rt_critics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.9892969727516174,
-0.43475964665412903,
0.4014473557472229,
0.4795728027820587,
-0.13557229936122894,
0.045541826635599136,
-0.13092352449893951,
-0.14595037698745728,
0.7042940258979797,
0.5358117818832397,
-0.9403520226478577,
-0.5872722864151001,
-0.6482906937599182,
-0.1873831897974... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
matchbench/viznet | matchbench | 2022-12-06T09:22:30Z | 20 | 1 | null | [
"region:us"
] | 2022-12-06T09:22:30Z | 2022-11-02T13:42:45.000Z | 2022-11-02T13:42:45 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
SALT-NLP/MIC | SALT-NLP | 2022-11-03T03:37:01Z | 20 | 1 | null | [
"region:us"
] | 2022-11-03T03:37:01Z | 2022-11-03T03:32:46.000Z | 2022-11-03T03:32:46 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ju-resplande/qa-pt | ju-resplande | 2022-11-25T20:31:56Z | 20 | 6 | null | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|mqa",
"language:pt",
"license:cc0-1.0",
"region:us"
] | 2022-11-25T20:31:56Z | 2022-11-03T22:57:12.000Z | 2022-11-03T22:57:12 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- pt
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: qa-portuguese
size_categories:
- 1M<n<10M
source_datasets:
- extended|mqa
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---
# Dataset Card for QA-Portuguese
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The preprocessed Portuguese split of the [MQA dataset](https://huggingface.co/datasets/clips/mqa).
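A minimal loading sketch (the Hub ID follows this card; split and field names are not documented here, so inspect the loaded object before use):
```python
from datasets import load_dataset

qa_pt = load_dataset("ju-resplande/qa-pt")
print(qa_pt)  # shows the available splits and columns
```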
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in Portuguese.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
| [
-0.470576673746109,
-0.48652413487434387,
0.05025418475270271,
0.3115983307361603,
-0.444492906332016,
0.24736206233501434,
-0.07135991007089615,
-0.3342674970626831,
0.8216720223426819,
0.6476390957832336,
-0.7688577175140381,
-1.0462218523025513,
-0.6395551562309265,
0.2403528392314911,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Javtor/biomedical-topic-categorization | Javtor | 2022-11-13T02:22:35Z | 20 | 0 | null | [
"region:us"
] | 2022-11-13T02:22:35Z | 2022-11-13T00:24:11.000Z | 2022-11-13T00:24:11 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2011_ge | bigbio | 2022-12-22T15:43:51Z | 20 | 0 | null | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-12-22T15:43:51Z | 2022-11-13T22:06:52.000Z | 2022-11-13T22:06:52 |
---
language:
- en
bigbio_language:
- English
license: cc-by-3.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_3p0
pretty_name: BioNLP 2011 GE
homepage: https://sites.google.com/site/bionlpst/bionlp-shared-task-2011/genia-event-extraction-genia
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- NAMED_ENTITY_RECOGNITION
- COREFERENCE_RESOLUTION
---
# Dataset Card for BioNLP 2011 GE
## Dataset Description
- **Homepage:** https://sites.google.com/site/bionlpst/bionlp-shared-task-2011/genia-event-extraction-genia
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The BioNLP-ST GE task has been promoting the development of fine-grained information extraction (IE) from biomedical
documents since 2009. In particular, it has focused on the domain of NFkB as a model domain of biomedical IE.
The GENIA task aims at extracting events occurring upon genes or gene products, which are typed as "Protein"
without differentiating genes from gene products. Other types of physical entities, e.g. cells, cell components,
are not differentiated from each other, and their type is given as "Entity".
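A minimal loading sketch (the config name follows the usual BigBio convention of `<dataset>_source` and `<dataset>_bigbio_kb` schemas; this convention is an assumption here, so check the loader for the exact names):
```python
from datasets import load_dataset

# BigBio datasets typically expose a source schema and a unified KB schema
ge = load_dataset("bigbio/bionlp_st_2011_ge", name="bionlp_st_2011_ge_bigbio_kb")
print(ge)
```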
## Citation Information
```
@inproceedings{10.5555/2107691.2107693,
author = {Kim, Jin-Dong and Wang, Yue and Takagi, Toshihisa and Yonezawa, Akinori},
title = {Overview of Genia Event Task in BioNLP Shared Task 2011},
year = {2011},
isbn = {9781937284091},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {The Genia event task, a bio-molecular event extraction task,
is arranged as one of the main tasks of BioNLP Shared Task 2011.
As its second time to be arranged for community-wide focused
efforts, it aimed to measure the advance of the community since 2009,
and to evaluate generalization of the technology to full text papers.
After a 3-month system development period, 15 teams submitted their
performance results on test cases. The results show the community has
made a significant advancement in terms of both performance improvement
and generalization.},
booktitle = {Proceedings of the BioNLP Shared Task 2011 Workshop},
pages = {7–15},
numpages = {9},
location = {Portland, Oregon},
series = {BioNLP Shared Task '11}
}
```
| [
-0.20819447934627533,
-0.6285663843154907,
0.3216264545917511,
0.04056776314973831,
-0.41105884313583374,
0.0035077708307653666,
-0.09791713953018188,
-0.7171359658241272,
0.6057114005088806,
0.06878578662872314,
-0.41204625368118286,
-0.7535246014595032,
-0.5070390105247498,
0.24121722579... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
bigbio/bionlp_st_2011_id | bigbio | 2022-12-22T15:43:52Z | 20 | 1 | null | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-12-22T15:43:52Z | 2022-11-13T22:06:56.000Z | 2022-11-13T22:06:56 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: GENIA_PROJECT_LICENSE
pretty_name: BioNLP 2011 ID
homepage: https://github.com/openbiocorpora/bionlp-st-2011-id
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- EVENT_EXTRACTION
- COREFERENCE_RESOLUTION
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for BioNLP 2011 ID
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-id
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,COREF,NER
The dataset of the Infectious Diseases (ID) task of
BioNLP Shared Task 2011.
## Citation Information
```
@inproceedings{pyysalo-etal-2011-overview,
title = "Overview of the Infectious Diseases ({ID}) task of {B}io{NLP} Shared Task 2011",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Rak, Rafal and
Sullivan, Dan and
Mao, Chunhong and
Wang, Chunxia and
Sobral, Bruno and
Tsujii, Jun{'}ichi and
Ananiadou, Sophia",
booktitle = "Proceedings of {B}io{NLP} Shared Task 2011 Workshop",
month = jun,
year = "2011",
address = "Portland, Oregon, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W11-1804",
pages = "26--35",
}
```
| [
0.0529959611594677,
-0.3662673532962799,
0.2382963001728058,
0.22472502291202545,
-0.4134540259838104,
-0.06450150161981583,
-0.21323077380657196,
-0.591213047504425,
0.7183929085731506,
0.09546647220849991,
-0.3260326385498047,
-0.8012725114822388,
-0.5277214646339417,
0.4527698755264282,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
WillHeld/top_v2 | WillHeld | 2022-12-10T17:52:27Z | 20 | 0 | null | [
"region:us"
] | 2022-12-10T17:52:27Z | 2022-11-18T00:41:44.000Z | 2022-11-18T00:41:44 | ---
dataset_info:
features:
- name: domain
dtype: string
- name: utterance
dtype: string
- name: semantic_parse
dtype: string
splits:
- name: eval
num_bytes: 2650777
num_examples: 17160
- name: test
num_bytes: 5947186
num_examples: 38785
- name: train
num_bytes: 19433606
num_examples: 124597
download_size: 9672445
dataset_size: 28031569
---
# Dataset Card for "top_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.6034470200538635,
-0.2629382610321045,
0.12483619898557663,
0.15342089533805847,
-0.26548445224761963,
-0.12353496998548508,
0.4250085651874542,
-0.15454243123531342,
0.7226957678794861,
0.5853952169418335,
-0.9179359674453735,
-0.6382377743721008,
-0.8665900230407715,
-0.45442441105842... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
gojiteji/QRsst2 | gojiteji | 2022-11-20T15:50:42Z | 20 | 0 | null | [
"region:us"
] | 2022-11-20T15:50:42Z | 2022-11-20T15:50:22.000Z | 2022-11-20T15:50:22 | ---
dataset_info:
features:
- name: idx
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
0: negative
1: positive
- name: image
dtype: image
splits:
- name: train
num_bytes: 150258864.979
num_examples: 67349
download_size: 77510123
dataset_size: 150258864.979
---
# Dataset Card for "QRsst2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.345175176858902,
-0.003453807672485709,
0.18403582274913788,
0.11984328180551529,
-0.3927304148674011,
0.08217920362949371,
0.4663993716239929,
-0.0776647999882698,
0.5530955791473389,
0.27560704946517944,
-0.763876736164093,
-0.5419867038726807,
-0.5498954653739929,
-0.3157562613487243... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-top_vi-b5257d-2174969943 | autoevaluate | 2022-11-21T05:28:44Z | 20 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T05:28:44Z | 2022-11-21T04:36:12.000Z | 2022-11-21T04:36:12 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: bigscience/bloom-1b7
metrics: []
dataset_name: futin/feed
dataset_config: top_vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/feed
* Config: top_vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.206252783536911,
-0.3654276132583618,
0.4389948546886444,
0.06491697579622269,
0.0860862135887146,
-0.1528279334306717,
-0.00980802159756422,
-0.41004329919815063,
0.04385107383131981,
0.3269839882850647,
-0.9553228616714478,
-0.24725092947483063,
-0.7314328551292419,
-0.032433915883302... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-top_vi-71f14a-2175469965 | autoevaluate | 2022-11-21T06:02:48Z | 20 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T06:02:48Z | 2022-11-21T05:41:21.000Z | 2022-11-21T05:41:21 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-350m
metrics: []
dataset_name: futin/feed
dataset_config: top_vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/feed
* Config: top_vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.32531359791755676,
-0.4648949205875397,
0.32163000106811523,
0.022916371002793312,
0.020463233813643456,
-0.12787769734859467,
0.005259434226900339,
-0.4121301770210266,
0.1403820961713791,
0.3942025601863861,
-0.9995837807655334,
-0.22120237350463867,
-0.6689043045043945,
-0.0491343624... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Norod78/hewiki-20220901-articles-dataset | Norod78 | 2022-11-22T10:57:40Z | 20 | 0 | null | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:extended|wikipedia",
"language:he... | 2022-11-22T10:57:40Z | 2022-11-21T08:10:15.000Z | 2022-11-21T08:10:15 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1458031124
num_examples: 4325836
download_size: 745537027
dataset_size: 1458031124
annotations_creators:
- other
language_creators:
- other
language:
- he
multilinguality:
- monolingual
pretty_name: hewiki Corpus from hewiki-20220901-pages-articles-multistream.xml.bz2
size_categories:
- 100M<n<1B
source_datasets:
- extended|wikipedia
tags:
- he-wiki
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for "hewiki-20220901-articles-dataset" | [
-0.49590733647346497,
0.1791626363992691,
-0.030783742666244507,
0.38198691606521606,
-0.6519379019737244,
0.13135398924350739,
-0.04456311836838722,
0.0006368214380927384,
0.45239725708961487,
0.5181339383125305,
-0.7877065539360046,
-0.873751699924469,
-0.5042684078216553,
0.294074714183... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
autoevaluate/autoeval-eval-futin__feed-sen_en-2f01d7-2175769987 | autoevaluate | 2022-11-21T12:43:31Z | 20 | 0 | null | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-21T12:43:31Z | 2022-11-21T09:05:34.000Z | 2022-11-21T09:05:34 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/feed
eval_info:
task: text_zero_shot_classification
model: facebook/opt-13b
metrics: []
dataset_name: futin/feed
dataset_config: sen_en
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/feed
* Config: sen_en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | [
-0.3108331859111786,
-0.4782126247882843,
0.33091822266578674,
0.030126716941595078,
0.04671886935830116,
-0.14786143600940704,
-0.003894468769431114,
-0.4753929376602173,
0.1303798407316208,
0.364164799451828,
-1.0351256132125854,
-0.21542459726333618,
-0.6413382291793823,
0.0131028657779... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ML-Projects-Kiel/tweetyface | ML-Projects-Kiel | 2022-11-27T20:41:29Z | 20 | 1 | null | [
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:en",
"language:de",
"license:apache-2.0",
"region:us"
] | 2022-11-27T20:41:29Z | 2022-11-23T12:04:44.000Z | 2022-11-23T12:04:44 | ---
annotations_creators:
- machine-generated
language:
- en
- de
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: tweetyface_en
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for "tweetyface"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** [GitHub](https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers)
### Dataset Summary
A dataset containing tweets from prominent Twitter users.
The dataset was created using a crawler for the Twitter API.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English, German
## Dataset Structure
### Data Instances
#### english
- **Size of downloaded dataset files:** 4.77 MB
- **Size of the generated dataset:** 5.92 MB
- **Total amount of disk used:** 4.77 MB
#### german
- **Size of downloaded dataset files:** 2.58 MB
- **Size of the generated dataset:** 3.10 MB
- **Total amount of disk used:** 2.59 MB
An example of 'validation' looks as follows.
```
{
    "text": "@SpaceX @Space_Station About twice as much useful mass to orbit as rest of Earth combined",
    "label": "elonmusk",
    "idx": 1001283
}
```
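As a quick check, the example above can be reproduced with the `datasets` library. A minimal sketch; the config name `english` and the `ClassLabel` decoding are assumptions based on this card:

```python
from datasets import load_dataset

# Load the English configuration (config name assumed from this card).
ds = load_dataset("ML-Projects-Kiel/tweetyface", "english")

# Decode the classification label of one validation example back to a user name.
example = ds["validation"][0]
label_name = ds["validation"].features["label"].int2str(example["label"])
print(example["text"], label_name, example["idx"])
```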
### Data Fields
The data fields are the same among all splits and languages.
- `text`: a `string` feature.
- `label`: a classification label
- `idx`: an `int64` feature.
### Data Splits
| name | train | validation |
| ------- | ----: | ---------: |
| english | 27857 | 6965 |
| german | 10254 | 2564 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
| [
-0.3161562383174896,
-0.5588123798370361,
0.2033025473356247,
0.3906839191913605,
-0.17903846502304077,
0.38246583938598633,
-0.28491276502609253,
-0.3542366623878479,
0.6120897531509399,
0.557133674621582,
-0.9104232788085938,
-1.133093237876892,
-0.7312836050987244,
0.028512384742498398,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ibm/MedMentions-ZS | ibm | 2022-11-25T16:49:58Z | 20 | 0 | null | [
"region:us"
] | 2022-11-25T16:49:58Z | 2022-11-25T16:28:59.000Z | 2022-11-25T16:28:59 | Entry not found | [
-0.3227645754814148,
-0.22568479180335999,
0.8622264862060547,
0.43461528420448303,
-0.52829909324646,
0.7012971639633179,
0.7915720343589783,
0.07618614286184311,
0.774603009223938,
0.2563217282295227,
-0.7852813005447388,
-0.22573819756507874,
-0.9104477167129517,
0.5715674161911011,
-... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
surrey-nlp/SAD | surrey-nlp | 2022-11-28T18:41:51Z | 20 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-28T18:41:51Z | 2022-11-28T15:26:38.000Z | 2022-11-28T15:26:38 | ---
annotations_creators:
- Jordan Painter, Diptesh Kanojia
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
---
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# SAD
The SAD dataset is our gold-standard dataset of tweets labelled for sarcasm. These tweets were scraped by observing the '#sarcasm' hashtag and then manually annotated by three annotators.
There are 1,170 pairs, each consisting of a sarcastic and a non-sarcastic tweet posted by the same user, for a total of 2,340 tweets annotated for sarcasm.
These tweets can be retrieved via the Twitter API so that they can be used for other experiments.
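Since the repository distributes tweet IDs and labels rather than text, the tweets have to be hydrated through the API. A minimal sketch with `tweepy` v4; the bearer token is a placeholder and the `Tweet ID` column name is an assumption taken from the field list below:

```python
import tweepy
from datasets import load_dataset

ds = load_dataset("surrey-nlp/SAD", split="train")

# Hydrate the first 100 tweet IDs (the Twitter API allows up to 100 ids per call).
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder token
ids = [row["Tweet ID"] for row in ds.select(range(100))]  # column name assumed from this card
response = client.get_tweets(ids=ids, tweet_fields=["text"])
for tweet in response.data:
    print(tweet.id, tweet.text)
```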
# Data Fields
- Tweet ID: The ID of the labelled tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 1638
- Valid: 351
- Test: 351 | [
-0.27309465408325195,
-0.5473299026489258,
0.4288478493690491,
0.5924935936927795,
-0.15261267125606537,
-0.1838938444852829,
0.191509410738945,
-0.2319335639476776,
0.34703144431114197,
0.43644243478775024,
-0.6057400107383728,
-0.5597267150878906,
-0.7332069277763367,
0.5021845102310181,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
surrey-nlp/S3D-v1 | surrey-nlp | 2022-11-28T18:46:48Z | 20 | 0 | null | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-28T18:46:48Z | 2022-11-28T15:27:35.000Z | 2022-11-28T15:27:35 | ---
annotations_creators:
- Jordan Painter, Diptesh Kanojia
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
---
## Table of Contents
- [Dataset Description](#dataset-description)
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# S3D Summary
The S3D dataset is our silver-standard dataset of 100,000 tweets labelled for sarcasm via weak supervision with our **BERTweet-sarcasm-combined** model.
These tweets can be retrieved via the Twitter API so that they can be used for other experiments.
S3D contains 38,879 tweets labelled as sarcastic and 61,211 tweets labelled as not sarcastic.
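The class balance quoted above can be verified once the dataset is loaded; a minimal sketch (the repo id and the `Label` column name are assumptions based on this card):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("surrey-nlp/S3D-v1")

# Tally sarcastic vs. non-sarcastic labels in every available split.
for split_name, split in ds.items():
    print(split_name, Counter(split["Label"]))  # column name assumed from this card
```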
# Data Fields
- Tweet ID: The ID of the labelled tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 70,000
- Valid: 15,000
- Test: 15,000 | [
-0.24907031655311584,
-0.5344393253326416,
0.41252419352531433,
0.6000586748123169,
-0.08551713079214096,
-0.2797398269176483,
0.21937546133995056,
-0.1120394840836525,
0.25173789262771606,
0.5461306571960449,
-0.6001496315002441,
-0.6190741062164307,
-0.7324450016021729,
0.408611238002777... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
1aurent/individuality-of-handwriting | 1aurent | 2023-10-01T15:15:30Z | 20 | 0 | null | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:unknown",
"legal",
"signatures",
"CEDAR",
"region:us"
] | 2023-10-01T15:15:30Z | 2022-12-01T20:42:04.000Z | 2022-12-01T20:42:04 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- image-classification
pretty_name: Individuality Of Handwriting (CEDAR)
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': original
'1': forgeries
- name: individual
dtype: uint8
- name: figure
dtype: uint8
splits:
- name: train
num_bytes: 195780898.8
num_examples: 2640
download_size: 252337526
dataset_size: 195780898.8
tags:
- legal
- signatures
- CEDAR
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Individuality Of Handwriting (CEDAR)
https://pubmed.ncbi.nlm.nih.gov/12136998/ \
https://cedar.buffalo.edu/NIJ/projectinfo.html
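A minimal loading sketch; the feature names (`image`, `label`, `individual`, `figure`) are taken from this repository's metadata:

```python
from datasets import load_dataset

ds = load_dataset("1aurent/individuality-of-handwriting", split="train")

sample = ds[0]
# `label` is a ClassLabel: 0 = "original", 1 = "forgeries".
print(ds.features["label"].int2str(sample["label"]),
      "individual:", sample["individual"], "figure:", sample["figure"])
sample["image"].show()  # PIL image; opens in an external viewer
```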
## Abstract
Motivated by several rulings in United States courts concerning expert testimony in general, and handwriting testimony in particular, we undertook a study to objectively validate the hypothesis that handwriting is individual. Handwriting samples of 1,500 individuals, representative of the U.S. population with respect to gender, age, ethnic groups, etc., were obtained. Analyzing differences in handwriting was done by using computer algorithms for extracting features from scanned images of handwriting. Attributes characteristic of the handwriting were obtained, e.g., line separation, slant, character shapes, etc. These attributes, which are a subset of attributes used by forensic document examiners (FDEs), were used to quantitatively establish individuality by using machine learning approaches. Using global attributes of handwriting and very few characters in the writing, the ability to determine the writer with a high degree of confidence was established. The work is a step towards providing scientific support for admitting handwriting evidence in court. The mathematical approach and the resulting software also have the promise of aiding the FDE.
Srihari SN, Cha SH, Arora H, Lee S. Individuality of handwriting. J Forensic Sci. 2002 Jul;47(4):856-72. PMID: 12136998. | [
0.04421984776854515,
-0.7644034624099731,
0.6052289605140686,
0.47379162907600403,
-0.14046306908130646,
0.03072410263121128,
-0.03537524491548538,
-0.3805275857448578,
0.3197396397590637,
0.29370686411857605,
-0.38299885392189026,
-0.7836165428161621,
-0.6927356123924255,
0.20846587419509... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ipipan/nkjp1m | ipipan | 2022-12-07T16:47:51Z | 20 | 2 | null | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"task_ids:lemmatization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-4.0",
... | 2022-12-07T16:47:51Z | 2022-12-07T16:41:20.000Z | 2022-12-07T16:41:20 | ---
annotations_creators:
- expert-generated
language:
- pl
language_creators:
- expert-generated
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: NKJP1M
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- National Corpus of Polish
- Narodowy Korpus Języka Polskiego
task_categories:
- token-classification
task_ids:
- part-of-speech
- lemmatization
dataset_info:
features:
- name: nkjp_text
dtype: string
- name: nkjp_par
dtype: string
- name: nkjp_sent
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: cposes
sequence:
class_label:
names:
0: A
1: Adv
2: Comp
3: Conj
4: Dig
5: Interj
6: N
7: Num
8: Part
9: Prep
10: Punct
11: V
12: X
- name: poses
sequence:
class_label:
names:
0: adj
1: adja
2: adjc
3: adjp
4: adv
5: aglt
6: bedzie
7: brev
8: comp
9: conj
10: depr
11: dig
12: fin
13: frag
14: ger
15: imps
16: impt
17: inf
18: interj
19: interp
20: num
21: numcomp
22: pact
23: pacta
24: pant
25: part
26: pcon
27: ppas
28: ppron12
29: ppron3
30: praet
31: pred
32: prep
33: romandig
34: siebie
35: subst
36: sym
37: winien
38: xxs
39: xxx
- name: tags
sequence:
class_label:
names:
0: adj:pl:acc:f:com
1: adj:pl:acc:f:pos
2: adj:pl:acc:f:sup
3: adj:pl:acc:m1:com
4: adj:pl:acc:m1:pos
5: adj:pl:acc:m1:sup
6: adj:pl:acc:m2:com
7: adj:pl:acc:m2:pos
8: adj:pl:acc:m2:sup
9: adj:pl:acc:m3:com
10: adj:pl:acc:m3:pos
11: adj:pl:acc:m3:sup
12: adj:pl:acc:n:com
13: adj:pl:acc:n:pos
14: adj:pl:acc:n:sup
15: adj:pl:dat:f:com
16: adj:pl:dat:f:pos
17: adj:pl:dat:f:sup
18: adj:pl:dat:m1:com
19: adj:pl:dat:m1:pos
20: adj:pl:dat:m1:sup
21: adj:pl:dat:m2:pos
22: adj:pl:dat:m3:com
23: adj:pl:dat:m3:pos
24: adj:pl:dat:n:pos
25: adj:pl:dat:n:sup
26: adj:pl:gen:f:com
27: adj:pl:gen:f:pos
28: adj:pl:gen:f:sup
29: adj:pl:gen:m1:com
30: adj:pl:gen:m1:pos
31: adj:pl:gen:m1:sup
32: adj:pl:gen:m2:com
33: adj:pl:gen:m2:pos
34: adj:pl:gen:m2:sup
35: adj:pl:gen:m3:com
36: adj:pl:gen:m3:pos
37: adj:pl:gen:m3:sup
38: adj:pl:gen:n:com
39: adj:pl:gen:n:pos
40: adj:pl:gen:n:sup
41: adj:pl:inst:f:com
42: adj:pl:inst:f:pos
43: adj:pl:inst:f:sup
44: adj:pl:inst:m1:com
45: adj:pl:inst:m1:pos
46: adj:pl:inst:m1:sup
47: adj:pl:inst:m2:pos
48: adj:pl:inst:m3:com
49: adj:pl:inst:m3:pos
50: adj:pl:inst:m3:sup
51: adj:pl:inst:n:com
52: adj:pl:inst:n:pos
53: adj:pl:inst:n:sup
54: adj:pl:loc:f:com
55: adj:pl:loc:f:pos
56: adj:pl:loc:f:sup
57: adj:pl:loc:m1:com
58: adj:pl:loc:m1:pos
59: adj:pl:loc:m1:sup
60: adj:pl:loc:m2:pos
61: adj:pl:loc:m3:com
62: adj:pl:loc:m3:pos
63: adj:pl:loc:m3:sup
64: adj:pl:loc:n:com
65: adj:pl:loc:n:pos
66: adj:pl:loc:n:sup
67: adj:pl:nom:f:com
68: adj:pl:nom:f:pos
69: adj:pl:nom:f:sup
70: adj:pl:nom:m1:com
71: adj:pl:nom:m1:pos
72: adj:pl:nom:m1:sup
73: adj:pl:nom:m2:com
74: adj:pl:nom:m2:pos
75: adj:pl:nom:m2:sup
76: adj:pl:nom:m3:com
77: adj:pl:nom:m3:pos
78: adj:pl:nom:m3:sup
79: adj:pl:nom:n:com
80: adj:pl:nom:n:pos
81: adj:pl:nom:n:sup
82: adj:pl:voc:f:pos
83: adj:pl:voc:m1:pos
84: adj:pl:voc:m2:pos
85: adj:pl:voc:n:pos
86: adj:sg:acc:f:com
87: adj:sg:acc:f:pos
88: adj:sg:acc:f:sup
89: adj:sg:acc:m1:com
90: adj:sg:acc:m1:pos
91: adj:sg:acc:m1:sup
92: adj:sg:acc:m2:com
93: adj:sg:acc:m2:pos
94: adj:sg:acc:m2:sup
95: adj:sg:acc:m3:com
96: adj:sg:acc:m3:pos
97: adj:sg:acc:m3:sup
98: adj:sg:acc:n:com
99: adj:sg:acc:n:pos
100: adj:sg:acc:n:sup
101: adj:sg:dat:f:com
102: adj:sg:dat:f:pos
103: adj:sg:dat:f:sup
104: adj:sg:dat:m1:com
105: adj:sg:dat:m1:pos
106: adj:sg:dat:m1:sup
107: adj:sg:dat:m2:pos
108: adj:sg:dat:m3:com
109: adj:sg:dat:m3:pos
110: adj:sg:dat:m3:sup
111: adj:sg:dat:n:com
112: adj:sg:dat:n:pos
113: adj:sg:dat:n:sup
114: adj:sg:gen:f:com
115: adj:sg:gen:f:pos
116: adj:sg:gen:f:sup
117: adj:sg:gen:m1:com
118: adj:sg:gen:m1:pos
119: adj:sg:gen:m1:sup
120: adj:sg:gen:m2:pos
121: adj:sg:gen:m2:sup
122: adj:sg:gen:m3:com
123: adj:sg:gen:m3:pos
124: adj:sg:gen:m3:sup
125: adj:sg:gen:n:com
126: adj:sg:gen:n:pos
127: adj:sg:gen:n:sup
128: adj:sg:inst:f:com
129: adj:sg:inst:f:pos
130: adj:sg:inst:f:sup
131: adj:sg:inst:m1:com
132: adj:sg:inst:m1:pos
133: adj:sg:inst:m1:sup
134: adj:sg:inst:m2:com
135: adj:sg:inst:m2:pos
136: adj:sg:inst:m2:sup
137: adj:sg:inst:m3:com
138: adj:sg:inst:m3:pos
139: adj:sg:inst:m3:sup
140: adj:sg:inst:n:com
141: adj:sg:inst:n:pos
142: adj:sg:inst:n:sup
143: adj:sg:loc:f:com
144: adj:sg:loc:f:pos
145: adj:sg:loc:f:sup
146: adj:sg:loc:m1:com
147: adj:sg:loc:m1:pos
148: adj:sg:loc:m1:sup
149: adj:sg:loc:m2:com
150: adj:sg:loc:m2:pos
151: adj:sg:loc:m3:com
152: adj:sg:loc:m3:pos
153: adj:sg:loc:m3:sup
154: adj:sg:loc:n:com
155: adj:sg:loc:n:pos
156: adj:sg:loc:n:sup
157: adj:sg:nom:f:com
158: adj:sg:nom:f:pos
159: adj:sg:nom:f:sup
160: adj:sg:nom:m1:com
161: adj:sg:nom:m1:pos
162: adj:sg:nom:m1:sup
163: adj:sg:nom:m2:com
164: adj:sg:nom:m2:pos
165: adj:sg:nom:m2:sup
166: adj:sg:nom:m3:com
167: adj:sg:nom:m3:pos
168: adj:sg:nom:m3:sup
169: adj:sg:nom:n:com
170: adj:sg:nom:n:pos
171: adj:sg:nom:n:sup
172: adj:sg:voc:f:pos
173: adj:sg:voc:f:sup
174: adj:sg:voc:m1:pos
175: adj:sg:voc:m1:sup
176: adj:sg:voc:m2:pos
177: adj:sg:voc:m3:pos
178: adj:sg:voc:n:pos
179: adja
180: adjc
181: adjp:dat
182: adjp:gen
183: adv
184: adv:com
185: adv:pos
186: adv:sup
187: aglt:pl:pri:imperf:nwok
188: aglt:pl:sec:imperf:nwok
189: aglt:sg:pri:imperf:nwok
190: aglt:sg:pri:imperf:wok
191: aglt:sg:sec:imperf:nwok
192: aglt:sg:sec:imperf:wok
193: bedzie:pl:pri:imperf
194: bedzie:pl:sec:imperf
195: bedzie:pl:ter:imperf
196: bedzie:sg:pri:imperf
197: bedzie:sg:sec:imperf
198: bedzie:sg:ter:imperf
199: brev:npun
200: brev:pun
201: comp
202: conj
203: depr:pl:acc:m2
204: depr:pl:nom:m2
205: depr:pl:voc:m2
206: dig
207: fin:pl:pri:imperf
208: fin:pl:pri:perf
209: fin:pl:sec:imperf
210: fin:pl:sec:perf
211: fin:pl:ter:imperf
212: fin:pl:ter:perf
213: fin:sg:pri:imperf
214: fin:sg:pri:perf
215: fin:sg:sec:imperf
216: fin:sg:sec:perf
217: fin:sg:ter:imperf
218: fin:sg:ter:perf
219: frag
220: ger:pl:acc:n:imperf:aff
221: ger:pl:acc:n:perf:aff
222: ger:pl:dat:n:perf:aff
223: ger:pl:gen:n:imperf:aff
224: ger:pl:gen:n:perf:aff
225: ger:pl:inst:n:imperf:aff
226: ger:pl:inst:n:perf:aff
227: ger:pl:loc:n:imperf:aff
228: ger:pl:loc:n:perf:aff
229: ger:pl:nom:n:imperf:aff
230: ger:pl:nom:n:perf:aff
231: ger:sg:acc:n:imperf:aff
232: ger:sg:acc:n:imperf:neg
233: ger:sg:acc:n:perf:aff
234: ger:sg:acc:n:perf:neg
235: ger:sg:dat:n:imperf:aff
236: ger:sg:dat:n:perf:aff
237: ger:sg:dat:n:perf:neg
238: ger:sg:gen:n:imperf:aff
239: ger:sg:gen:n:imperf:neg
240: ger:sg:gen:n:perf:aff
241: ger:sg:gen:n:perf:neg
242: ger:sg:inst:n:imperf:aff
243: ger:sg:inst:n:imperf:neg
244: ger:sg:inst:n:perf:aff
245: ger:sg:inst:n:perf:neg
246: ger:sg:loc:n:imperf:aff
247: ger:sg:loc:n:imperf:neg
248: ger:sg:loc:n:perf:aff
249: ger:sg:loc:n:perf:neg
250: ger:sg:nom:n:imperf:aff
251: ger:sg:nom:n:imperf:neg
252: ger:sg:nom:n:perf:aff
253: ger:sg:nom:n:perf:neg
254: imps:imperf
255: imps:perf
256: impt:pl:pri:imperf
257: impt:pl:pri:perf
258: impt:pl:sec:imperf
259: impt:pl:sec:perf
260: impt:sg:pri:imperf
261: impt:sg:sec:imperf
262: impt:sg:sec:perf
263: inf:imperf
264: inf:perf
265: interj
266: interp
267: num:pl:acc:f:congr:ncol
268: num:pl:acc:f:rec
269: num:pl:acc:f:rec:ncol
270: num:pl:acc:m1:rec
271: num:pl:acc:m1:rec:col
272: num:pl:acc:m1:rec:ncol
273: num:pl:acc:m2:congr:ncol
274: num:pl:acc:m2:rec
275: num:pl:acc:m2:rec:ncol
276: num:pl:acc:m3:congr
277: num:pl:acc:m3:congr:ncol
278: num:pl:acc:m3:rec
279: num:pl:acc:m3:rec:ncol
280: num:pl:acc:n:congr:ncol
281: num:pl:acc:n:rec
282: num:pl:acc:n:rec:col
283: num:pl:acc:n:rec:ncol
284: num:pl:dat:f:congr
285: num:pl:dat:f:congr:ncol
286: num:pl:dat:m1:congr
287: num:pl:dat:m1:congr:col
288: num:pl:dat:m1:congr:ncol
289: num:pl:dat:m2:congr
290: num:pl:dat:m3:congr:ncol
291: num:pl:dat:n:congr
292: num:pl:dat:n:congr:ncol
293: num:pl:gen:f:congr
294: num:pl:gen:f:congr:ncol
295: num:pl:gen:f:rec
296: num:pl:gen:f:rec:ncol
297: num:pl:gen:m1:congr
298: num:pl:gen:m1:congr:ncol
299: num:pl:gen:m1:rec
300: num:pl:gen:m1:rec:col
301: num:pl:gen:m2:congr
302: num:pl:gen:m2:congr:ncol
303: num:pl:gen:m2:rec
304: num:pl:gen:m3:congr
305: num:pl:gen:m3:congr:ncol
306: num:pl:gen:m3:rec
307: num:pl:gen:m3:rec:ncol
308: num:pl:gen:n:congr
309: num:pl:gen:n:congr:ncol
310: num:pl:gen:n:rec
311: num:pl:gen:n:rec:col
312: num:pl:inst:f:congr
313: num:pl:inst:f:congr:ncol
314: num:pl:inst:m1:congr
315: num:pl:inst:m1:congr:ncol
316: num:pl:inst:m1:rec:col
317: num:pl:inst:m2:congr
318: num:pl:inst:m2:congr:ncol
319: num:pl:inst:m3:congr
320: num:pl:inst:m3:congr:ncol
321: num:pl:inst:n:congr
322: num:pl:inst:n:congr:ncol
323: num:pl:inst:n:rec:col
324: num:pl:loc:f:congr
325: num:pl:loc:f:congr:ncol
326: num:pl:loc:m1:congr
327: num:pl:loc:m1:congr:ncol
328: num:pl:loc:m2:congr
329: num:pl:loc:m2:congr:ncol
330: num:pl:loc:m3:congr
331: num:pl:loc:m3:congr:ncol
332: num:pl:loc:n:congr
333: num:pl:loc:n:congr:ncol
334: num:pl:nom:f:congr:ncol
335: num:pl:nom:f:rec
336: num:pl:nom:f:rec:ncol
337: num:pl:nom:m1:congr:ncol
338: num:pl:nom:m1:rec
339: num:pl:nom:m1:rec:col
340: num:pl:nom:m1:rec:ncol
341: num:pl:nom:m2:congr:ncol
342: num:pl:nom:m2:rec
343: num:pl:nom:m2:rec:ncol
344: num:pl:nom:m3:congr:ncol
345: num:pl:nom:m3:rec
346: num:pl:nom:m3:rec:ncol
347: num:pl:nom:n:congr
348: num:pl:nom:n:congr:ncol
349: num:pl:nom:n:rec
350: num:pl:nom:n:rec:col
351: num:pl:nom:n:rec:ncol
352: num:sg:acc:f:rec
353: num:sg:acc:f:rec:ncol
354: num:sg:acc:m1:rec:ncol
355: num:sg:acc:m2:rec
356: num:sg:acc:m3:rec
357: num:sg:acc:m3:rec:ncol
358: num:sg:acc:n:rec
359: num:sg:gen:f:rec
360: num:sg:gen:m3:rec
361: num:sg:gen:n:rec
362: num:sg:inst:m3:rec
363: num:sg:loc:f:rec
364: num:sg:loc:m3:congr
365: num:sg:loc:m3:rec
366: num:sg:nom:f:rec
367: num:sg:nom:m2:rec
368: num:sg:nom:m3:rec
369: num:sg:nom:m3:rec:ncol
370: num:sg:nom:n:rec
371: numcomp
372: pact:pl:acc:f:imperf:aff
373: pact:pl:acc:f:imperf:neg
374: pact:pl:acc:m1:imperf:aff
375: pact:pl:acc:m2:imperf:aff
376: pact:pl:acc:m3:imperf:aff
377: pact:pl:acc:m3:imperf:neg
378: pact:pl:acc:n:imperf:aff
379: pact:pl:acc:n:imperf:neg
380: pact:pl:dat:f:imperf:aff
381: pact:pl:dat:m1:imperf:aff
382: pact:pl:dat:m2:imperf:aff
383: pact:pl:dat:m3:imperf:aff
384: pact:pl:dat:n:imperf:aff
385: pact:pl:gen:f:imperf:aff
386: pact:pl:gen:f:imperf:neg
387: pact:pl:gen:m1:imperf:aff
388: pact:pl:gen:m1:imperf:neg
389: pact:pl:gen:m2:imperf:aff
390: pact:pl:gen:m3:imperf:aff
391: pact:pl:gen:m3:imperf:neg
392: pact:pl:gen:n:imperf:aff
393: pact:pl:inst:f:imperf:aff
394: pact:pl:inst:m1:imperf:aff
395: pact:pl:inst:m2:imperf:aff
396: pact:pl:inst:m3:imperf:aff
397: pact:pl:inst:m3:imperf:neg
398: pact:pl:inst:n:imperf:aff
399: pact:pl:inst:n:imperf:neg
400: pact:pl:loc:f:imperf:aff
401: pact:pl:loc:m1:imperf:aff
402: pact:pl:loc:m3:imperf:aff
403: pact:pl:loc:m3:imperf:neg
404: pact:pl:loc:n:imperf:aff
405: pact:pl:loc:n:imperf:neg
406: pact:pl:nom:f:imperf:aff
407: pact:pl:nom:f:imperf:neg
408: pact:pl:nom:m1:imperf:aff
409: pact:pl:nom:m2:imperf:aff
410: pact:pl:nom:m3:imperf:aff
411: pact:pl:nom:n:imperf:aff
412: pact:pl:nom:n:imperf:neg
413: pact:pl:voc:f:imperf:aff
414: pact:sg:acc:f:imperf:aff
415: pact:sg:acc:f:imperf:neg
416: pact:sg:acc:m1:imperf:aff
417: pact:sg:acc:m2:imperf:aff
418: pact:sg:acc:m3:imperf:aff
419: pact:sg:acc:n:imperf:aff
420: pact:sg:acc:n:imperf:neg
421: pact:sg:dat:f:imperf:aff
422: pact:sg:dat:m1:imperf:aff
423: pact:sg:dat:m2:imperf:aff
424: pact:sg:dat:m3:imperf:aff
425: pact:sg:dat:n:imperf:aff
426: pact:sg:gen:f:imperf:aff
427: pact:sg:gen:f:imperf:neg
428: pact:sg:gen:m1:imperf:aff
429: pact:sg:gen:m1:imperf:neg
430: pact:sg:gen:m2:imperf:aff
431: pact:sg:gen:m3:imperf:aff
432: pact:sg:gen:m3:imperf:neg
433: pact:sg:gen:n:imperf:aff
434: pact:sg:gen:n:imperf:neg
435: pact:sg:inst:f:imperf:aff
436: pact:sg:inst:f:imperf:neg
437: pact:sg:inst:m1:imperf:aff
438: pact:sg:inst:m1:imperf:neg
439: pact:sg:inst:m2:imperf:aff
440: pact:sg:inst:m2:imperf:neg
441: pact:sg:inst:m3:imperf:aff
442: pact:sg:inst:m3:imperf:neg
443: pact:sg:inst:n:imperf:aff
444: pact:sg:loc:f:imperf:aff
445: pact:sg:loc:f:imperf:neg
446: pact:sg:loc:m1:imperf:aff
447: pact:sg:loc:m2:imperf:aff
448: pact:sg:loc:m3:imperf:aff
449: pact:sg:loc:m3:imperf:neg
450: pact:sg:loc:n:imperf:aff
451: pact:sg:loc:n:imperf:neg
452: pact:sg:nom:f:imperf:aff
453: pact:sg:nom:f:imperf:neg
454: pact:sg:nom:m1:imperf:aff
455: pact:sg:nom:m1:imperf:neg
456: pact:sg:nom:m2:imperf:aff
457: pact:sg:nom:m3:imperf:aff
458: pact:sg:nom:m3:imperf:neg
459: pact:sg:nom:n:imperf:aff
460: pact:sg:nom:n:imperf:neg
461: pact:sg:voc:m1:imperf:aff
462: pacta
463: pant:perf
464: part
465: part:nwok
466: part:wok
467: pcon:imperf
468: ppas:pl:acc:f:imperf:aff
469: ppas:pl:acc:f:perf:aff
470: ppas:pl:acc:f:perf:neg
471: ppas:pl:acc:m1:imperf:aff
472: ppas:pl:acc:m1:imperf:neg
473: ppas:pl:acc:m1:perf:aff
474: ppas:pl:acc:m1:perf:neg
475: ppas:pl:acc:m2:imperf:aff
476: ppas:pl:acc:m2:perf:aff
477: ppas:pl:acc:m3:imperf:aff
478: ppas:pl:acc:m3:perf:aff
479: ppas:pl:acc:m3:perf:neg
480: ppas:pl:acc:n:imperf:aff
481: ppas:pl:acc:n:imperf:neg
482: ppas:pl:acc:n:perf:aff
483: ppas:pl:acc:n:perf:neg
484: ppas:pl:dat:f:imperf:aff
485: ppas:pl:dat:f:perf:aff
486: ppas:pl:dat:f:perf:neg
487: ppas:pl:dat:m1:imperf:aff
488: ppas:pl:dat:m1:perf:aff
489: ppas:pl:dat:m1:perf:neg
490: ppas:pl:dat:m2:imperf:aff
491: ppas:pl:dat:m3:imperf:aff
492: ppas:pl:dat:m3:perf:aff
493: ppas:pl:dat:n:imperf:aff
494: ppas:pl:dat:n:perf:aff
495: ppas:pl:gen:f:imperf:aff
496: ppas:pl:gen:f:imperf:neg
497: ppas:pl:gen:f:perf:aff
498: ppas:pl:gen:f:perf:neg
499: ppas:pl:gen:m1:imperf:aff
500: ppas:pl:gen:m1:imperf:neg
501: ppas:pl:gen:m1:perf:aff
502: ppas:pl:gen:m1:perf:neg
503: ppas:pl:gen:m2:imperf:aff
504: ppas:pl:gen:m2:perf:aff
505: ppas:pl:gen:m3:imperf:aff
506: ppas:pl:gen:m3:imperf:neg
507: ppas:pl:gen:m3:perf:aff
508: ppas:pl:gen:m3:perf:neg
509: ppas:pl:gen:n:imperf:aff
510: ppas:pl:gen:n:perf:aff
511: ppas:pl:gen:n:perf:neg
512: ppas:pl:inst:f:imperf:aff
513: ppas:pl:inst:f:perf:aff
514: ppas:pl:inst:m1:imperf:aff
515: ppas:pl:inst:m1:perf:aff
516: ppas:pl:inst:m2:perf:aff
517: ppas:pl:inst:m3:imperf:aff
518: ppas:pl:inst:m3:perf:aff
519: ppas:pl:inst:n:imperf:aff
520: ppas:pl:inst:n:perf:aff
521: ppas:pl:loc:f:imperf:aff
522: ppas:pl:loc:f:imperf:neg
523: ppas:pl:loc:f:perf:aff
524: ppas:pl:loc:f:perf:neg
525: ppas:pl:loc:m1:imperf:aff
526: ppas:pl:loc:m1:perf:aff
527: ppas:pl:loc:m2:imperf:aff
528: ppas:pl:loc:m3:imperf:aff
529: ppas:pl:loc:m3:perf:aff
530: ppas:pl:loc:m3:perf:neg
531: ppas:pl:loc:n:imperf:aff
532: ppas:pl:loc:n:perf:aff
533: ppas:pl:loc:n:perf:neg
534: ppas:pl:nom:f:imperf:aff
535: ppas:pl:nom:f:imperf:neg
536: ppas:pl:nom:f:perf:aff
537: ppas:pl:nom:f:perf:neg
538: ppas:pl:nom:m1:imperf:aff
539: ppas:pl:nom:m1:imperf:neg
540: ppas:pl:nom:m1:perf:aff
541: ppas:pl:nom:m1:perf:neg
542: ppas:pl:nom:m2:imperf:aff
543: ppas:pl:nom:m2:perf:aff
544: ppas:pl:nom:m3:imperf:aff
545: ppas:pl:nom:m3:imperf:neg
546: ppas:pl:nom:m3:perf:aff
547: ppas:pl:nom:m3:perf:neg
548: ppas:pl:nom:n:imperf:aff
549: ppas:pl:nom:n:perf:aff
550: ppas:pl:nom:n:perf:neg
551: ppas:pl:voc:f:imperf:aff
552: ppas:sg:acc:f:imperf:aff
553: ppas:sg:acc:f:imperf:neg
554: ppas:sg:acc:f:perf:aff
555: ppas:sg:acc:f:perf:neg
556: ppas:sg:acc:m1:imperf:aff
557: ppas:sg:acc:m1:perf:aff
558: ppas:sg:acc:m2:imperf:aff
559: ppas:sg:acc:m2:perf:aff
560: ppas:sg:acc:m3:imperf:aff
561: ppas:sg:acc:m3:imperf:neg
562: ppas:sg:acc:m3:perf:aff
563: ppas:sg:acc:m3:perf:neg
564: ppas:sg:acc:n:imperf:aff
565: ppas:sg:acc:n:perf:aff
566: ppas:sg:acc:n:perf:neg
567: ppas:sg:dat:f:imperf:aff
568: ppas:sg:dat:f:imperf:neg
569: ppas:sg:dat:f:perf:aff
570: ppas:sg:dat:f:perf:neg
571: ppas:sg:dat:m1:imperf:aff
572: ppas:sg:dat:m1:perf:aff
573: ppas:sg:dat:m2:perf:aff
574: ppas:sg:dat:m3:imperf:aff
575: ppas:sg:dat:m3:perf:aff
576: ppas:sg:dat:n:perf:aff
577: ppas:sg:gen:f:imperf:aff
578: ppas:sg:gen:f:imperf:neg
579: ppas:sg:gen:f:perf:aff
580: ppas:sg:gen:f:perf:neg
581: ppas:sg:gen:m1:imperf:aff
582: ppas:sg:gen:m1:perf:aff
583: ppas:sg:gen:m1:perf:neg
584: ppas:sg:gen:m2:imperf:aff
585: ppas:sg:gen:m2:perf:aff
586: ppas:sg:gen:m3:imperf:aff
587: ppas:sg:gen:m3:imperf:neg
588: ppas:sg:gen:m3:perf:aff
589: ppas:sg:gen:m3:perf:neg
590: ppas:sg:gen:n:imperf:aff
591: ppas:sg:gen:n:imperf:neg
592: ppas:sg:gen:n:perf:aff
593: ppas:sg:gen:n:perf:neg
594: ppas:sg:inst:f:imperf:aff
595: ppas:sg:inst:f:imperf:neg
596: ppas:sg:inst:f:perf:aff
597: ppas:sg:inst:f:perf:neg
598: ppas:sg:inst:m1:imperf:aff
599: ppas:sg:inst:m1:imperf:neg
600: ppas:sg:inst:m1:perf:aff
601: ppas:sg:inst:m1:perf:neg
602: ppas:sg:inst:m2:imperf:aff
603: ppas:sg:inst:m2:perf:aff
604: ppas:sg:inst:m3:imperf:aff
605: ppas:sg:inst:m3:imperf:neg
606: ppas:sg:inst:m3:perf:aff
607: ppas:sg:inst:m3:perf:neg
608: ppas:sg:inst:n:imperf:aff
609: ppas:sg:inst:n:imperf:neg
610: ppas:sg:inst:n:perf:aff
611: ppas:sg:inst:n:perf:neg
612: ppas:sg:loc:f:imperf:aff
613: ppas:sg:loc:f:perf:aff
614: ppas:sg:loc:f:perf:neg
615: ppas:sg:loc:m1:imperf:aff
616: ppas:sg:loc:m1:perf:aff
617: ppas:sg:loc:m2:imperf:aff
618: ppas:sg:loc:m3:imperf:aff
619: ppas:sg:loc:m3:imperf:neg
620: ppas:sg:loc:m3:perf:aff
621: ppas:sg:loc:m3:perf:neg
622: ppas:sg:loc:n:imperf:aff
623: ppas:sg:loc:n:perf:aff
624: ppas:sg:loc:n:perf:neg
625: ppas:sg:nom:f:imperf:aff
626: ppas:sg:nom:f:imperf:neg
627: ppas:sg:nom:f:perf:aff
628: ppas:sg:nom:f:perf:neg
629: ppas:sg:nom:m1:imperf:aff
630: ppas:sg:nom:m1:imperf:neg
631: ppas:sg:nom:m1:perf:aff
632: ppas:sg:nom:m1:perf:neg
633: ppas:sg:nom:m2:imperf:aff
634: ppas:sg:nom:m2:perf:aff
635: ppas:sg:nom:m3:imperf:aff
636: ppas:sg:nom:m3:imperf:neg
637: ppas:sg:nom:m3:perf:aff
638: ppas:sg:nom:m3:perf:neg
639: ppas:sg:nom:n:imperf:aff
640: ppas:sg:nom:n:imperf:neg
641: ppas:sg:nom:n:perf:aff
642: ppas:sg:nom:n:perf:neg
643: ppas:sg:voc:m1:perf:aff
644: ppas:sg:voc:m2:imperf:aff
645: ppas:sg:voc:m3:perf:aff
646: ppron12:pl:acc:f:pri
647: ppron12:pl:acc:f:sec
648: ppron12:pl:acc:m1:pri
649: ppron12:pl:acc:m1:sec
650: ppron12:pl:acc:m2:sec
651: ppron12:pl:acc:n:sec
652: ppron12:pl:dat:f:pri
653: ppron12:pl:dat:f:sec
654: ppron12:pl:dat:m1:pri
655: ppron12:pl:dat:m1:sec
656: ppron12:pl:dat:m3:sec
657: ppron12:pl:gen:f:pri
658: ppron12:pl:gen:f:sec
659: ppron12:pl:gen:m1:pri
660: ppron12:pl:gen:m1:sec
661: ppron12:pl:gen:m2:pri
662: ppron12:pl:inst:f:pri
663: ppron12:pl:inst:m1:pri
664: ppron12:pl:inst:m1:sec
665: ppron12:pl:inst:n:pri
666: ppron12:pl:loc:f:sec
667: ppron12:pl:loc:m1:pri
668: ppron12:pl:loc:m1:sec
669: ppron12:pl:loc:m3:sec
670: ppron12:pl:nom:f:pri
671: ppron12:pl:nom:f:sec
672: ppron12:pl:nom:m1:pri
673: ppron12:pl:nom:m1:sec
674: ppron12:pl:nom:m2:pri
675: ppron12:pl:nom:n:sec
676: ppron12:pl:voc:m1:sec
677: ppron12:pl:voc:m2:sec
678: ppron12:sg:acc:f:pri:akc
679: ppron12:sg:acc:f:sec:akc
680: ppron12:sg:acc:f:sec:nakc
681: ppron12:sg:acc:m1:pri:akc
682: ppron12:sg:acc:m1:pri:nakc
683: ppron12:sg:acc:m1:sec:akc
684: ppron12:sg:acc:m1:sec:nakc
685: ppron12:sg:acc:m2:pri:akc
686: ppron12:sg:acc:m2:sec:nakc
687: ppron12:sg:acc:m3:pri:akc
688: ppron12:sg:acc:m3:sec:nakc
689: ppron12:sg:acc:n:pri:akc
690: ppron12:sg:acc:n:sec:nakc
691: ppron12:sg:dat:f:pri:akc
692: ppron12:sg:dat:f:pri:nakc
693: ppron12:sg:dat:f:sec:akc
694: ppron12:sg:dat:f:sec:nakc
695: ppron12:sg:dat:m1:pri:akc
696: ppron12:sg:dat:m1:pri:nakc
697: ppron12:sg:dat:m1:sec:akc
698: ppron12:sg:dat:m1:sec:nakc
699: ppron12:sg:dat:m2:pri:nakc
700: ppron12:sg:dat:m2:sec:akc
701: ppron12:sg:dat:m2:sec:nakc
702: ppron12:sg:gen:f:pri:akc
703: ppron12:sg:gen:f:sec:akc
704: ppron12:sg:gen:f:sec:nakc
705: ppron12:sg:gen:m1:pri:akc
706: ppron12:sg:gen:m1:sec:akc
707: ppron12:sg:gen:m1:sec:nakc
708: ppron12:sg:gen:m2:pri:akc
709: ppron12:sg:gen:m2:sec:akc
710: ppron12:sg:gen:m2:sec:nakc
711: ppron12:sg:gen:n:pri:akc
712: ppron12:sg:inst:f:pri
713: ppron12:sg:inst:f:sec
714: ppron12:sg:inst:m1:pri
715: ppron12:sg:inst:m1:pri:nakc
716: ppron12:sg:inst:m1:sec
717: ppron12:sg:inst:n:sec
718: ppron12:sg:loc:f:pri
719: ppron12:sg:loc:f:sec
720: ppron12:sg:loc:m1:pri
721: ppron12:sg:loc:m1:sec
722: ppron12:sg:loc:m3:pri
723: ppron12:sg:nom:f:pri
724: ppron12:sg:nom:f:sec
725: ppron12:sg:nom:m1:pri
726: ppron12:sg:nom:m1:pri:nakc
727: ppron12:sg:nom:m1:sec
728: ppron12:sg:nom:m2:pri
729: ppron12:sg:nom:m2:sec
730: ppron12:sg:nom:m3:pri
731: ppron12:sg:nom:m3:sec
732: ppron12:sg:nom:n:sec
733: ppron12:sg:voc:f:sec
734: ppron12:sg:voc:m1:sec
735: ppron12:sg:voc:m2:sec
736: ppron12:sg:voc:n:sec
737: ppron3:pl:acc:f:ter:akc:npraep
738: ppron3:pl:acc:f:ter:akc:praep
739: ppron3:pl:acc:m1:ter:akc:npraep
740: ppron3:pl:acc:m1:ter:akc:praep
741: ppron3:pl:acc:m2:ter:akc:npraep
742: ppron3:pl:acc:m2:ter:akc:praep
743: ppron3:pl:acc:m3:ter:akc:npraep
744: ppron3:pl:acc:m3:ter:akc:praep
745: ppron3:pl:acc:n:ter:akc:npraep
746: ppron3:pl:acc:n:ter:akc:praep
747: ppron3:pl:dat:f:ter:akc:npraep
748: ppron3:pl:dat:f:ter:akc:praep
749: ppron3:pl:dat:m1:ter:akc:npraep
750: ppron3:pl:dat:m1:ter:akc:praep
751: ppron3:pl:dat:m2:ter:akc:npraep
752: ppron3:pl:dat:m3:ter:akc:npraep
753: ppron3:pl:dat:m3:ter:akc:praep
754: ppron3:pl:dat:n:ter:akc:npraep
755: ppron3:pl:gen:f:ter:akc:npraep
756: ppron3:pl:gen:f:ter:akc:praep
757: ppron3:pl:gen:m1:ter:akc:npraep
758: ppron3:pl:gen:m1:ter:akc:praep
759: ppron3:pl:gen:m2:ter:akc:npraep
760: ppron3:pl:gen:m2:ter:akc:praep
761: ppron3:pl:gen:m3:ter:akc:npraep
762: ppron3:pl:gen:m3:ter:akc:praep
763: ppron3:pl:gen:n:ter:akc:npraep
764: ppron3:pl:gen:n:ter:akc:praep
765: ppron3:pl:inst:f:ter:akc:npraep
766: ppron3:pl:inst:f:ter:akc:praep
767: ppron3:pl:inst:m1:ter:akc:npraep
768: ppron3:pl:inst:m1:ter:akc:praep
769: ppron3:pl:inst:m2:ter:akc:npraep
770: ppron3:pl:inst:m2:ter:akc:praep
771: ppron3:pl:inst:m3:ter:akc:npraep
772: ppron3:pl:inst:m3:ter:akc:praep
773: ppron3:pl:inst:n:ter:akc:npraep
774: ppron3:pl:inst:n:ter:akc:praep
775: ppron3:pl:loc:f:ter:akc:praep
776: ppron3:pl:loc:m1:ter:akc:praep
777: ppron3:pl:loc:m2:ter:akc:praep
778: ppron3:pl:loc:m3:ter:akc:praep
779: ppron3:pl:loc:n:ter:akc:praep
780: ppron3:pl:nom:f:ter:akc:npraep
781: ppron3:pl:nom:f:ter:nakc:npraep
782: ppron3:pl:nom:m1:ter:akc:npraep
783: ppron3:pl:nom:m2:ter:akc:npraep
784: ppron3:pl:nom:m3:ter:akc:npraep
785: ppron3:pl:nom:n:ter:akc:npraep
786: ppron3:sg:acc:f:ter:akc:npraep
787: ppron3:sg:acc:f:ter:akc:praep
788: ppron3:sg:acc:m1:ter:akc:npraep
789: ppron3:sg:acc:m1:ter:akc:praep
790: ppron3:sg:acc:m1:ter:nakc:npraep
791: ppron3:sg:acc:m1:ter:nakc:praep
792: ppron3:sg:acc:m2:ter:akc:praep
793: ppron3:sg:acc:m2:ter:nakc:npraep
794: ppron3:sg:acc:m2:ter:nakc:praep
795: ppron3:sg:acc:m3:ter:akc:npraep
796: ppron3:sg:acc:m3:ter:akc:praep
797: ppron3:sg:acc:m3:ter:nakc:npraep
798: ppron3:sg:acc:m3:ter:nakc:praep
799: ppron3:sg:acc:n:ter:akc:npraep
800: ppron3:sg:acc:n:ter:akc:praep
801: ppron3:sg:dat:f:ter:akc:npraep
802: ppron3:sg:dat:f:ter:akc:praep
803: ppron3:sg:dat:m1:ter:akc:npraep
804: ppron3:sg:dat:m1:ter:akc:praep
805: ppron3:sg:dat:m1:ter:nakc:npraep
806: ppron3:sg:dat:m2:ter:akc:npraep
807: ppron3:sg:dat:m2:ter:nakc:npraep
808: ppron3:sg:dat:m3:ter:akc:npraep
809: ppron3:sg:dat:m3:ter:akc:praep
810: ppron3:sg:dat:m3:ter:nakc:npraep
811: ppron3:sg:dat:n:ter:akc:npraep
812: ppron3:sg:dat:n:ter:akc:praep
813: ppron3:sg:dat:n:ter:nakc:npraep
814: ppron3:sg:gen:f:ter:akc:npraep
815: ppron3:sg:gen:f:ter:akc:praep
816: ppron3:sg:gen:m1:ter:akc:npraep
817: ppron3:sg:gen:m1:ter:akc:praep
818: ppron3:sg:gen:m1:ter:nakc:npraep
819: ppron3:sg:gen:m1:ter:nakc:praep
820: ppron3:sg:gen:m2:ter:akc:npraep
821: ppron3:sg:gen:m2:ter:akc:praep
822: ppron3:sg:gen:m2:ter:nakc:npraep
823: ppron3:sg:gen:m3:ter:akc:npraep
824: ppron3:sg:gen:m3:ter:akc:praep
825: ppron3:sg:gen:m3:ter:nakc:npraep
826: ppron3:sg:gen:m3:ter:nakc:praep
827: ppron3:sg:gen:n:ter:akc:npraep
828: ppron3:sg:gen:n:ter:akc:praep
829: ppron3:sg:gen:n:ter:nakc:npraep
830: ppron3:sg:inst:f:ter:akc:praep
831: ppron3:sg:inst:m1:ter:akc:npraep
832: ppron3:sg:inst:m1:ter:akc:praep
833: ppron3:sg:inst:m2:ter:akc:npraep
834: ppron3:sg:inst:m2:ter:akc:praep
835: ppron3:sg:inst:m3:ter:akc:npraep
836: ppron3:sg:inst:m3:ter:akc:praep
837: ppron3:sg:inst:n:ter:akc:npraep
838: ppron3:sg:inst:n:ter:akc:praep
839: ppron3:sg:loc:f:ter:akc:praep
840: ppron3:sg:loc:m1:ter:akc:praep
841: ppron3:sg:loc:m2:ter:akc:praep
842: ppron3:sg:loc:m3:ter:akc:praep
843: ppron3:sg:loc:n:ter:akc:praep
844: ppron3:sg:nom:f:ter:akc:npraep
845: ppron3:sg:nom:f:ter:akc:praep
846: ppron3:sg:nom:m1:ter:akc:npraep
847: ppron3:sg:nom:m2:ter:akc:npraep
848: ppron3:sg:nom:m2:ter:akc:praep
849: ppron3:sg:nom:m3:ter:akc:npraep
850: ppron3:sg:nom:n:ter:akc:npraep
851: praet:pl:f:imperf
852: praet:pl:f:perf
853: praet:pl:m1:imperf
854: praet:pl:m1:imperf:agl
855: praet:pl:m1:perf
856: praet:pl:m2:imperf
857: praet:pl:m2:perf
858: praet:pl:m3:imperf
859: praet:pl:m3:perf
860: praet:pl:n:imperf
861: praet:pl:n:perf
862: praet:sg:f:imperf
863: praet:sg:f:imperf:agl
864: praet:sg:f:imperf:nagl
865: praet:sg:f:perf
866: praet:sg:m1:imperf
867: praet:sg:m1:imperf:agl
868: praet:sg:m1:imperf:nagl
869: praet:sg:m1:perf
870: praet:sg:m1:perf:agl
871: praet:sg:m1:perf:nagl
872: praet:sg:m2:imperf
873: praet:sg:m2:imperf:nagl
874: praet:sg:m2:perf
875: praet:sg:m2:perf:nagl
876: praet:sg:m3:imperf
877: praet:sg:m3:imperf:nagl
878: praet:sg:m3:perf
879: praet:sg:m3:perf:nagl
880: praet:sg:n:imperf
881: praet:sg:n:perf
882: pred
883: prep:acc
884: prep:acc:nwok
885: prep:acc:wok
886: prep:dat
887: prep:gen
888: prep:gen:nwok
889: prep:gen:wok
890: prep:inst
891: prep:inst:nwok
892: prep:inst:wok
893: prep:loc
894: prep:loc:nwok
895: prep:loc:wok
896: prep:nom
897: romandig
898: siebie:acc
899: siebie:dat
900: siebie:gen
901: siebie:inst
902: siebie:loc
903: subst:pl:acc:f
904: subst:pl:acc:m1
905: subst:pl:acc:m1:pt
906: subst:pl:acc:m2
907: subst:pl:acc:m3
908: subst:pl:acc:n:col
909: subst:pl:acc:n:ncol
910: subst:pl:acc:n:pt
911: subst:pl:dat:f
912: subst:pl:dat:m1
913: subst:pl:dat:m1:pt
914: subst:pl:dat:m2
915: subst:pl:dat:m3
916: subst:pl:dat:n:col
917: subst:pl:dat:n:ncol
918: subst:pl:dat:n:pt
919: subst:pl:gen:f
920: subst:pl:gen:m1
921: subst:pl:gen:m1:pt
922: subst:pl:gen:m2
923: subst:pl:gen:m3
924: subst:pl:gen:n:col
925: subst:pl:gen:n:ncol
926: subst:pl:gen:n:pt
927: subst:pl:inst:f
928: subst:pl:inst:m1
929: subst:pl:inst:m1:pt
930: subst:pl:inst:m2
931: subst:pl:inst:m3
932: subst:pl:inst:n:col
933: subst:pl:inst:n:ncol
934: subst:pl:inst:n:pt
935: subst:pl:loc:f
936: subst:pl:loc:m1
937: subst:pl:loc:m1:pt
938: subst:pl:loc:m2
939: subst:pl:loc:m3
940: subst:pl:loc:n:col
941: subst:pl:loc:n:ncol
942: subst:pl:loc:n:pt
943: subst:pl:nom:f
944: subst:pl:nom:m1
945: subst:pl:nom:m1:pt
946: subst:pl:nom:m2
947: subst:pl:nom:m3
948: subst:pl:nom:n:col
949: subst:pl:nom:n:ncol
950: subst:pl:nom:n:pt
951: subst:pl:voc:f
952: subst:pl:voc:m1
953: subst:pl:voc:m1:pt
954: subst:pl:voc:m3
955: subst:pl:voc:n:col
956: subst:pl:voc:n:ncol
957: subst:pl:voc:n:pt
958: subst:sg:acc:f
959: subst:sg:acc:m1
960: subst:sg:acc:m2
961: subst:sg:acc:m3
962: subst:sg:acc:n:col
963: subst:sg:acc:n:ncol
964: subst:sg:dat:f
965: subst:sg:dat:m1
966: subst:sg:dat:m2
967: subst:sg:dat:m3
968: subst:sg:dat:n:col
969: subst:sg:dat:n:ncol
970: subst:sg:gen:f
971: subst:sg:gen:m1
972: subst:sg:gen:m2
973: subst:sg:gen:m3
974: subst:sg:gen:n:col
975: subst:sg:gen:n:ncol
976: subst:sg:inst:f
977: subst:sg:inst:m1
978: subst:sg:inst:m2
979: subst:sg:inst:m3
980: subst:sg:inst:n:col
981: subst:sg:inst:n:ncol
982: subst:sg:loc:f
983: subst:sg:loc:m1
984: subst:sg:loc:m2
985: subst:sg:loc:m3
986: subst:sg:loc:n:col
987: subst:sg:loc:n:ncol
988: subst:sg:nom:f
989: subst:sg:nom:m1
990: subst:sg:nom:m2
991: subst:sg:nom:m3
992: subst:sg:nom:n:col
993: subst:sg:nom:n:ncol
994: subst:sg:voc:f
995: subst:sg:voc:m1
996: subst:sg:voc:m2
997: subst:sg:voc:m3
998: subst:sg:voc:n:col
999: subst:sg:voc:n:ncol
1000: sym
1001: winien:pl:f:imperf
1002: winien:pl:m1:imperf
1003: winien:pl:m2:imperf
1004: winien:pl:m3:imperf
1005: winien:pl:n:imperf
1006: winien:sg:f:imperf
1007: winien:sg:m1:imperf
1008: winien:sg:m2:imperf
1009: winien:sg:m3:imperf
1010: winien:sg:n:imperf
1011: xxs:acc
1012: xxs:dat
1013: xxs:gen
1014: xxs:inst
1015: xxs:loc
1016: xxs:nom
1017: xxs:voc
1018: xxx
- name: nps
sequence: bool
- name: nkjp_ids
sequence: string
config_name: nkjp1m
splits:
- name: test
num_bytes: 8324533
num_examples: 8964
- name: train
num_bytes: 65022406
num_examples: 68943
- name: validation
num_bytes: 7465442
num_examples: 7755
download_size: 16167009
dataset_size: 80812381
---
# Dataset Card for NKJP1M – The manually annotated subcorpus of the National Corpus of Polish
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NKJP1M](http://clip.ipipan.waw.pl/NationalCorpusOfPolish)
- **Repository:** [NKJP1M-SGJP](http://download.sgjp.pl/morfeusz/current/)
- **Paper:** [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
- **Point of Contact:** mailto:morfeusz@ipipan.waw.pl
### Dataset Summary
This is the official dataset for NKJP1M – the 1-million-token balanced subcorpus of the National Corpus of Polish (Narodowy Korpus Języka Polskiego).
Besides the text (divided into paragraphs/samples and sentences), the set contains lemmas and morpho-syntactic tags for all tokens in the corpus.
This release, known as NKJP1M-SGJP, corresponds to version 1.2 of the corpus with later corrections and improvements. In particular, the morpho-syntactic annotation has been aligned with the present version of the Morfeusz2 SGJP morphological analyser (as of 2022.12.04).
### Supported Tasks and Leaderboards
The main use of this resource lies in training models for lemmatisation and part-of-speech tagging of Polish.
### Languages
Polish (monolingual)
## Dataset Structure
### Data Instances
```
{'nkjp_text': 'NKJP_1M_1102000002',
'nkjp_par': 'morph_1-p',
'nkjp_sent': 'morph_1.18-s',
'tokens': ['-', 'Nie', 'mam', 'pieniędzy', ',', 'da', 'mi', 'pani', 'wywiad', '?'],
'lemmas': ['-', 'nie', 'mieć', 'pieniądz', ',', 'dać', 'ja', 'pani', 'wywiad', '?'],
'cposes': [8, 11, 10, 9, 8, 10, 9, 9, 9, 8],
'poses': [19, 25, 12, 35, 19, 12, 28, 35, 35, 19],
'tags': [266, 464, 213, 923, 266, 218, 692, 988, 961, 266],
'nps': [False, False, False, False, True, False, False, False, False, True],
'nkjp_ids': ['morph_1.9-seg', 'morph_1.10-seg', 'morph_1.11-seg', 'morph_1.12-seg', 'morph_1.13-seg', 'morph_1.14-seg', 'morph_1.15-seg', 'morph_1.16-seg', 'morph_1.17-seg', 'morph_1.18-seg']}
```
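The integers in `cposes`, `poses` and `tags` are `ClassLabel` indices and can be decoded back to their symbolic names through the dataset's feature metadata; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("ipipan/nkjp1m", split="train")

sent = ds[0]
tag_feature = ds.features["tags"].feature  # ClassLabel over the 1019 morpho-syntactic tags

# Print each token with its lemma and decoded tag, e.g. "pieniędzy pieniądz subst:pl:gen:m3"
for token, lemma, tag in zip(sent["tokens"], sent["lemmas"], sent["tags"]):
    print(token, lemma, tag_feature.int2str(tag))
```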
### Data Fields
- `nkjp_text`, `nkjp_par`, `nkjp_sent` (strings): XML identifiers of the present text (document), paragraph and sentence in NKJP. (These make it possible to map a data point back to the source corpus and to identify paragraphs/samples.)
- `tokens` (sequence of strings): tokens of the text defined as in NKJP.
- `lemmas` (sequence of strings): lemmas corresponding to the tokens.
- `tags` (sequence of labels): morpho-syntactic tags according to the Morfeusz2 tagset (1019 distinct tags).
- `poses` (sequence of labels): flexemic class (detailed part of speech, 40 classes) – the first element of the corresponding tag.
- `cposes` (sequence of labels): coarse part of speech (13 classes): all verbal and deverbal flexemic classes are mapped to `V`, nominal to `N`, adjectival to `A`, “strange” ones (abbreviations, alien elements, symbols, emojis…) to `X`; the rest as in `poses`.
- `nps` (sequence of booleans): `True` means that the corresponding token is not preceded by a space in the source text.
- `nkjp_ids` (sequence of strings): XML identifiers of particular tokens in NKJP (probably an overkill).
### Data Splits
| | Train | Validation | Test |
| ----- | ------ | ----- | ---- |
| sentences | 68943 | 7755 | 8964 |
| tokens | 978368 | 112454 | 125059 |
## Dataset Creation
### Curation Rationale
The National Corpus of Polish (NKJP) was envisioned as the reference corpus of contemporary Polish.
The manually annotated subcorpus (NKJP1M) was thought of as the training data for various NLP tasks.
### Source Data
NKJP is balanced with respect to Polish readership. The detailed rationale is described in Chapter 3 of the [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf) (roughly: 50% press, 30% books, 10% speech, 10% other). The corpus contains texts from the years 1945–2010 (with 80% of the text in the range 1990–2010). Only original Polish texts were gathered (no translations from other languages). The composition of NKJP1M follows this schema (see Chapter 5).
### Annotations
The rules of morphosyntactic annotation used for NKJP are discussed in Chapter 6 of the [NKJP book](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf). Presently (2020), the corpus uses a common tagset with the morphological analyzer [Morfeusz 2](http://morfeusz.sgjp.pl/).
#### Annotation process
The texts were processed with Morfeusz and the resulting annotations were then manually disambiguated and validated/corrected. Each text sample was independently processed by two annotators. In case of annotation conflicts, an adjudicator stepped in.
### Licensing Information
 This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
Info on the source corpus: [link](http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf)
```
@Book{nkjp:12,
editor = "Adam Przepiórkowski and Mirosław Bańko and Rafał
L. Górski and Barbara Lewandowska-Tomaszczyk",
title = "Narodowy Korpus Języka Polskiego",
year = 2012,
address = "Warszawa",
pdf = "http://nkjp.pl/settings/papers/NKJP_ksiazka.pdf",
publisher = "Wydawnictwo Naukowe PWN"}
```
Current annotation scheme: [link](https://jezyk-polski.pl/index.php/jp/article/view/72)
```
@article{
kie:etal:21,
author = "Kieraś, Witold and Woliński, Marcin and Nitoń, Bartłomiej",
doi = "https://doi.org/10.31286/JP.101.2.5",
title = "Nowe wielowarstwowe znakowanie lingwistyczne zrównoważonego {N}arodowego {K}orpusu {J}ęzyka {P}olskiego",
url = "https://jezyk-polski.pl/index.php/jp/article/view/72",
journal = "Język Polski",
number = "2",
volume = "CI",
year = "2021",
pages = "59--70"
}
```
<!--
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
--> | [
-0.6058295369148254,
-0.45649945735931396,
0.17273905873298645,
0.32855093479156494,
-0.4540281593799591,
-0.08586779981851578,
-0.3434707522392273,
-0.2921648621559143,
0.5758476853370667,
0.5446606874465942,
-0.7265482544898987,
-0.8499050140380859,
-0.5758131146430969,
0.410302877426147... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
MauroLeidi/OCT_balanced | MauroLeidi | 2023-01-03T22:00:22Z | 20 | 0 | null | [
"region:us"
] | 2023-01-03T22:00:22Z | 2023-01-03T21:34:57.000Z | 2023-01-03T21:34:57 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: DRUSEN
1: NORMAL
splits:
- name: train
num_bytes: 1037539349.736
num_examples: 17232
- name: test
num_bytes: 21771538.0
num_examples: 500
download_size: 1080333714
dataset_size: 1059310887.736
---
# Dataset Card for "OCT_balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7005234956741333,
-0.0970991775393486,
0.07923999428749084,
0.44758331775665283,
-0.5139588713645935,
-0.0598175935447216,
0.44291070103645325,
-0.35343360900878906,
0.9151650667190552,
0.7132685780525208,
-0.6214371919631958,
-0.6708921790122986,
-0.47253143787384033,
0.018636468797922... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Cohere/wikipedia-22-12-de-embeddings | Cohere | 2023-03-22T16:52:49Z | 20 | 0 | null | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:de",
"license:apache-2.0",
"region:us"
] | 2023-03-22T16:52:49Z | 2023-01-14T13:41:14.000Z | 2023-01-14T13:41:14 | ---
annotations_creators:
- expert-generated
language:
- de
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (de)](https://de.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings).
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>")  # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | [
-0.7153791189193726,
-0.7075083255767822,
0.1829407960176468,
0.015261955559253693,
-0.18207010626792908,
-0.08926142752170563,
-0.3211948871612549,
-0.26061105728149414,
0.6018097400665283,
-0.023600637912750244,
-0.5135478377342224,
-0.8814945220947266,
-0.6538326740264893,
0.23113861680... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
svjack/bloom-dialogue-generate-ds-en | svjack | 2023-01-26T03:08:24Z | 20 | 1 | null | [
"region:us"
] | 2023-01-26T03:08:24Z | 2023-01-26T03:05:06.000Z | 2023-01-26T03:05:06 | ---
dataset_info:
features:
- name: question
dtype: string
- name: dialogue_text
dtype: string
- name: dialogue
sequence: string
- name: repo
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 33783729
num_examples: 8378
download_size: 34957337
dataset_size: 33783729
---
# Dataset Card for "bloom-dialogue-generate-ds-en"
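The rows ship with precomputed `embeddings`, so a simple similarity search can be run directly over that column. A minimal sketch; the dot-product scoring assumes the embeddings are on a comparable scale:

```python
import torch
from datasets import load_dataset

ds = load_dataset("svjack/bloom-dialogue-generate-ds-en", split="train")

# Stack the precomputed embeddings and rank all rows against the first one.
emb = torch.tensor(ds["embeddings"])
scores = torch.mm(emb[0:1], emb.T).squeeze(0)
for i in torch.topk(scores, k=3).indices.tolist():
    print(ds[i]["question"])
```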
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.666190505027771,
-0.3997196853160858,
0.4849531054496765,
0.32131242752075195,
-0.06809122115373611,
0.026646360754966736,
0.04152046889066696,
-0.056756094098091125,
0.8424320220947266,
0.47419416904449463,
-1.3715019226074219,
-0.8157076835632324,
-0.4556712508201599,
-0.2318193912506... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
relbert/conceptnet | relbert | 2023-03-31T10:34:46Z | 20 | 1 | null | [
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:other",
"region:us"
] | 2023-03-31T10:34:46Z | 2023-01-30T21:16:07.000Z | 2023-01-30T21:16:07 | ---
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- n<1K
pretty_name: relbert/conceptnet
---
# Dataset Card for "relbert/conceptnet"
## Dataset Description
- **Repository:** [RelBERT](https://github.com/asahi417/relbert)
- **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html)
- **Dataset:** High Confidence Subset of ConceptNet for link prediction
### Dataset Summary
The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html).
We removed `NotCapableOf` and `NotDesires` to keep only positive relations.
We use the original test set as the test set, dev1 as the training set, and dev2 as the validation set.
- Number of instances
| | train | validation | test |
|:--------------------------------|--------:|-------------:|-------:|
| number of pairs | 583082 | 1184 | 1187 |
| number of unique relation types | 28 | 20 | 19 |
- Number of pairs in each relation type
| | number of pairs (train) | number of pairs (validation) | number of pairs (test) |
|:-----------------|--------------------------:|-------------------------------:|-------------------------:|
| AtLocation | 69838 | 230 | 250 |
| CapableOf | 71840 | 124 | 144 |
| Causes | 34732 | 52 | 45 |
| CausesDesire | 9616 | 15 | 5 |
| CreatedBy | 534 | 1 | 2 |
| DefinedAs | 11048 | 2 | 1 |
| DesireOf | 28 | 0 | 0 |
| Desires | 8960 | 20 | 8 |
| HasA | 19234 | 43 | 41 |
| HasFirstSubevent | 7350 | 2 | 1 |
| HasLastSubevent | 5916 | 5 | 0 |
| HasPainCharacter | 2 | 0 | 0 |
| HasPainIntensity | 2 | 0 | 0 |
| HasPrerequisite | 47298 | 116 | 109 |
| HasProperty | 36610 | 63 | 70 |
| HasSubevent | 52468 | 82 | 83 |
| InheritsFrom | 112 | 0 | 0 |
| InstanceOf | 138 | 0 | 0 |
| IsA | 71034 | 197 | 211 |
| LocatedNear | 6 | 0 | 0 |
| LocationOfAction | 6 | 0 | 0 |
| MadeOf | 1518 | 10 | 14 |
| MotivatedByGoal | 23668 | 17 | 8 |
| PartOf | 5402 | 19 | 22 |
| ReceivesAction | 20656 | 15 | 11 |
| RelatedTo | 178 | 0 | 1 |
| SymbolOf | 328 | 2 | 0 |
| UsedFor | 84560 | 169 | 161 |
## Dataset Structure
An example of `train` looks as follows.
```json
{
"relation": "IsA",
"head": "baseball",
"tail": "sport"
}
```
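The per-relation counts in the tables above can be reproduced directly from the splits; a minimal sketch:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("relbert/conceptnet")

# Tally pairs and distinct relation types per split, mirroring the tables above.
for split in ("train", "validation", "test"):
    counts = Counter(ds[split]["relation"])
    print(split, sum(counts.values()), "pairs,", len(counts), "relation types")
```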
## Citation Information
```
@InProceedings{P16-1137,
author = "Li, Xiang
and Taheri, Aynaz
and Tu, Lifu
and Gimpel, Kevin",
title = "Commonsense Knowledge Base Completion",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ",
year = "2016",
publisher = "Association for Computational Linguistics",
pages = "1445--1455",
location = "Berlin, Germany",
doi = "10.18653/v1/P16-1137",
url = "http://aclweb.org/anthology/P16-1137"
}
``` | [
-0.5703604817390442,
-0.3781009912490845,
0.15797565877437592,
0.09103205800056458,
-0.03684685006737709,
-0.21890977025032043,
-0.10705479234457016,
-0.3538774251937866,
0.639671802520752,
0.25385230779647827,
-0.7729880809783936,
-0.7351258993148804,
-0.5666667819023132,
0.10676018893718... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ml4pubmed/pubmed-classification-20k | ml4pubmed | 2023-02-17T06:31:13Z | 20 | 0 | null | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"pubmed",
"region:us"
] | 2023-02-17T06:31:13Z | 2023-02-06T16:16:31.000Z | 2023-02-06T16:16:31 | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
tags:
- pubmed
size_categories:
- 10K<n<100K
---
# ml4pubmed/pubmed-classification-20k
- A 20k-example subset of PubMed data for text classification, taken from a course | [
0.007733544800430536,
-0.06733199208974838,
0.34948235750198364,
0.06562776118516922,
-0.2802107632160187,
0.4207967519760132,
0.2740418016910553,
-0.15505415201187134,
0.1784258484840393,
1.1631914377212524,
-0.2380477637052536,
-0.6321415901184082,
-0.18148772418498993,
0.289570182561874... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
FredZhang7/anime-prompts-180K | FredZhang7 | 2023-02-19T07:56:16Z | 20 | 14 | null | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:creativeml-openrail-m",
"region:us"
] | 2023-02-19T07:56:16Z | 2023-02-09T07:55:28.000Z | 2023-02-09T07:55:28 | ---
license: creativeml-openrail-m
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
For more info on data collection and the preprocessing algorithm, please see [Fast Anime PromptGen](https://huggingface.co/FredZhang7/anime-anything-promptgen-v2).
## 80K unique prompts
- `safebooru_clean`: Cleaned prompts with upscore ≥ 8 from the Safebooru API
---
For disclaimers about the Danbooru data, please see [Danbooru Tag Generator](https://huggingface.co/FredZhang7/danbooru-tag-generator).
## 100K unique prompts (each)
- `danbooru_raw`: Raw prompts with upscore ≥ 3 from Danbooru API
- `danbooru_clean`: Cleaned prompts with upscore ≥ 3 from Danbooru API
---
## Python
Download and save the dataset to anime_prompts.csv locally.
```bash
pip install datasets
```
```python
import csv

import datasets

# Load all three prompt columns from the Hub.
dataset = datasets.load_dataset("FredZhang7/anime-prompts-180K")
train = dataset["train"]
safebooru_clean = train["safebooru_clean"]
danbooru_clean = train["danbooru_clean"]
danbooru_raw = train["danbooru_raw"]

# Write the columns side by side; newline="" avoids blank rows on Windows.
with open("anime_prompts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["safebooru_clean", "danbooru_clean", "danbooru_raw"])
    for i in range(len(safebooru_clean)):
        writer.writerow([safebooru_clean[i], danbooru_clean[i], danbooru_raw[i]])
``` | [
-0.601261556148529,
-0.7403818368911743,
0.38296929001808167,
0.651995837688446,
-0.3515360355377197,
-0.03336253762245178,
-0.2851922810077667,
-0.0688009038567543,
0.21370510756969452,
0.40962448716163635,
-1.0735106468200684,
-0.6972230672836304,
-0.3211415708065033,
0.5539233088493347,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
NeelNanda/pile-small-tokenized-2b | NeelNanda | 2023-02-12T16:25:43Z | 20 | 0 | null | [
"region:us"
] | 2023-02-12T16:25:43Z | 2023-02-12T12:20:37.000Z | 2023-02-12T12:20:37 | ---
dataset_info:
features:
- name: tokens
sequence: int32
splits:
- name: train
num_bytes: 44263497500
num_examples: 10795975
download_size: 19763664789
dataset_size: 44263497500
---
# Dataset Card for "pile-small-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.5223063826560974,
-0.4244532883167267,
0.02347174473106861,
0.329393595457077,
-0.4946264922618866,
-0.008535430766642094,
0.36957526206970215,
-0.21620507538318634,
0.9560582637786865,
0.567068874835968,
-0.5408010482788086,
-0.49288028478622437,
-0.7837863564491272,
-0.378271341323852... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
LFBMS/class_dataset2 | LFBMS | 2023-02-12T16:15:51Z | 20 | 0 | null | [
"region:us"
] | 2023-02-12T16:15:51Z | 2023-02-12T16:12:48.000Z | 2023-02-12T16:12:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bilanz_datev
'1': bilanz_lexware
'2': guv
'3': other
splits:
- name: train
num_bytes: 13700431777.0
num_examples: 4000
- name: validation
num_bytes: 548626720.0
num_examples: 500
- name: test
num_bytes: 559045772.0
num_examples: 500
download_size: 5407648855
dataset_size: 14808104269.0
---
# Dataset Card for "class_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.45182883739471436,
-0.263095498085022,
0.06155217066407204,
0.11832855641841888,
-0.13731268048286438,
0.0938417911529541,
0.3364052176475525,
-0.2599346935749054,
0.5672246813774109,
0.37272909283638,
-0.5967028141021729,
-0.55611652135849,
-0.6533098816871643,
-0.38793236017227173,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AIARTCHAN/lora-kimhongdo | AIARTCHAN | 2023-02-24T03:26:06Z | 20 | 0 | null | [
"license:creativeml-openrail-m",
"lora",
"aiartchan",
"stable-diffusion",
"region:us"
] | 2023-02-24T03:26:06Z | 2023-02-24T03:20:16.000Z | 2023-02-24T03:20:16 | ---
license: creativeml-openrail-m
tags:
- lora
- aiartchan
- stable-diffusion
---
# Lora - KimHongDo
## Dataset Description
- **Original:** [KimHongDo LoRA share](https://arca.live/b/aiart/70311638)
[huggingface](https://huggingface.co/datasets/Toraong/Hypernetwork)
A LoRA file trained on traditional paintings.
[Download](https://huggingface.co/datasets/AIARTCHAN/lora-kimhongdo/resolve/main/KimHongDo.safetensors)
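A hedged usage sketch with `diffusers` — the base checkpoint below is an arbitrary placeholder, and single-file LoRA loading via `load_lora_weights` is an assumption about the library version:
```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

# Fetch the LoRA file from this dataset repository.
lora_path = hf_hub_download(
    "AIARTCHAN/lora-kimhongdo", "KimHongDo.safetensors", repo_type="dataset"
)

# The base checkpoint is an arbitrary placeholder; any SD 1.x model should do.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(lora_path)

image = pipe("a landscape in traditional painting style").images[0]
image.save("kimhongdo.png")
```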
-0.38501814007759094,
-0.6758264303207397,
0.0014877980574965477,
0.7161190509796143,
-0.5162245631217957,
-0.14071717858314514,
0.21672183275222778,
-0.28787028789520264,
1.107231855392456,
0.8044223785400391,
-0.6556012034416199,
-0.9244934320449829,
-0.6601772904396057,
0.13242484629154... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
timbrooks/instructpix2pix-clip-filtered | timbrooks | 2023-03-02T11:19:16Z | 20 | 13 | null | [
"size_categories:100K<n<1M",
"language:en",
"arxiv:2211.09800",
"region:us"
] | 2023-03-02T11:19:16Z | 2023-02-24T14:55:53.000Z | 2023-02-24T14:55:53 | ---
dataset_info:
features:
- name: original_prompt
dtype: string
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 130930966429.88
num_examples: 313010
download_size: 63067247926
dataset_size: 130930966429.88
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for InstructPix2Pix CLIP-filtered
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.timothybrooks.com/instruct-pix2pix
- **Repository:** https://github.com/timothybrooks/instruct-pix2pix
- **Paper:** https://arxiv.org/abs/2211.09800
## Dataset Summary
The dataset can be used to train models to follow edit instructions. The edit instruction
is available in the `edit_prompt` field; `original_image` is the input paired with the `edit_prompt`, and
`edited_image` is the image after the `edit_prompt` has been applied to the `original_image`.
Refer to the [GitHub repository](https://github.com/timothybrooks/instruct-pix2pix) to learn more about
how this dataset can be used to train a model that follows instructions.
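A minimal access sketch (streaming is optional, but useful given the ~63 GB download; field names follow the features listed above):
```python
from datasets import load_dataset

# Stream to avoid downloading the full archive up front.
dataset = load_dataset(
    "timbrooks/instructpix2pix-clip-filtered", split="train", streaming=True
)

sample = next(iter(dataset))
print(sample["original_prompt"])  # caption of the input image
print(sample["edit_prompt"])      # the edit instruction to follow
# PIL images: the model input and the expected result of the edit
original, edited = sample["original_image"], sample["edited_image"]
```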
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text descriptions are in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The license for this dataset is a custom license. Refer to the licensing file to know more.
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sayakpaul](https://github.com/sayakpaul) for contributing this dataset. | [
-0.4005802273750305,
-0.37213563919067383,
0.36070147156715393,
0.06056734174489975,
-0.268216073513031,
-0.06936584413051605,
-0.22715826332569122,
-0.39690157771110535,
0.2598527669906616,
0.5833355188369751,
-0.7866098880767822,
-0.8096526265144348,
-0.6726529002189636,
-0.2697131037712... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
KonradSzafer/stackoverflow_python_preprocessed | KonradSzafer | 2023-03-04T23:35:06Z | 20 | 7 | null | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-03-04T23:35:06Z | 2023-02-25T17:32:31.000Z | 2023-02-25T17:32:31 | ---
dataset_info:
features:
- name: title
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 5119086
num_examples: 3296
download_size: 1939470
dataset_size: 5119086
task_categories:
- question-answering
language:
- en
pretty_name: Stack Overflow Python - Preprocessed
size_categories:
- 1K<n<10K
---
# Dataset Card for "stackoverflow_python_preprocessed"
This is a preprocessed version of the `stackoverflow_python` dataset.
Questions and answers were filtered to include only questions with more than 100 votes and answers with more than 5 votes.
The dataset has been converted from HTML to plain text and includes only the title, question, and answer columns.
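A minimal sketch of the preprocessing described above (the `question_score` and `answer_score` field names are assumptions about the source schema, not its documented columns):
```python
from bs4 import BeautifulSoup


def html_to_text(html: str) -> str:
    # Strip HTML tags, keeping only the visible text.
    return BeautifulSoup(html, "html.parser").get_text(separator="\n").strip()


def keep(example: dict) -> bool:
    # Mirrors the vote thresholds described above.
    return example["question_score"] > 100 and example["answer_score"] > 5


def preprocess(example: dict) -> dict:
    # Retain only the three columns present in this dataset.
    return {
        "title": example["title"],
        "question": html_to_text(example["question"]),
        "answer": html_to_text(example["answer"]),
    }
```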
## Additional Information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7238186597824097,
-0.6981274485588074,
0.22301360964775085,
0.30538421869277954,
-0.3269127607345581,
0.0515984445810318,
-0.06355175375938416,
-0.19149014353752136,
0.4841369390487671,
0.8901936411857605,
-0.7957499623298645,
-0.626153290271759,
-0.376029908657074,
0.21305838227272034,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Isotonic/human_assistant_conversation | Isotonic | 2023-08-31T07:31:15Z | 20 | 4 | null | [
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:zh",
"license:afl-3.0",
"region:us"
] | 2023-08-31T07:31:15Z | 2023-02-28T20:59:35.000Z | 2023-02-28T20:59:35 | ---
license: afl-3.0
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2724591096.91667
num_examples: 1494223
- name: test
num_bytes: 681148230.08333
num_examples: 373556
download_size: 1996990227
dataset_size: 3405739327.0
task_categories:
- text-generation
- conversational
language:
- en
- es
- zh
size_categories:
- 100K<n<1M
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francesco/leaf-disease-nsdsr | Francesco | 2023-03-30T09:31:29Z | 20 | 0 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | 2023-03-30T09:31:29Z | 2023-03-30T09:30:59.000Z | 2023-03-30T09:30:59 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': leaf-disease
'1': mildew
'2': rose_P01
'3': rose_R02
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: leaf-disease-nsdsr
tags:
- rf100
---
# Dataset Card for leaf-disease-nsdsr
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/leaf-disease-nsdsr
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
leaf-disease-nsdsr
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format; see the drawing sketch after this list)
- `category`: the object's category.
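As a hedged illustration of consuming these fields (a sketch, assuming the dataset loads directly via `datasets`), the COCO-style `[x, y, width, height]` boxes can be drawn onto an image with Pillow:
```python
from datasets import load_dataset
from PIL import ImageDraw

dataset = load_dataset("Francesco/leaf-disease-nsdsr", split="train")

example = dataset[0]
image = example["image"].copy()
draw = ImageDraw.Draw(image)

# Map integer category ids to their class names.
names = dataset.features["objects"].feature["category"].names
for bbox, category in zip(example["objects"]["bbox"], example["objects"]["category"]):
    x, y, w, h = bbox  # COCO format: top-left corner plus width/height
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, y), names[category], fill="red")

image.save("annotated.png")
```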
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/leaf-disease-nsdsr
### Citation Information
```
@misc{ leaf-disease-nsdsr,
title = { leaf disease nsdsr Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/leaf-disease-nsdsr } },
url = { https://universe.roboflow.com/object-detection/leaf-disease-nsdsr },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.38375237584114075,
-0.5384299159049988,
0.16033975780010223,
-0.21061460673809052,
-0.3408554494380951,
-0.16796961426734924,
0.08292172849178314,
-0.5326555371284485,
0.402384877204895,
0.4654349684715271,
-0.6789141297340393,
-1.0463199615478516,
-0.48775333166122437,
0.31181415915489... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Francesco/wine-labels | Francesco | 2023-03-30T09:38:32Z | 20 | 1 | null | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | 2023-03-30T09:38:32Z | 2023-03-30T09:37:41.000Z | 2023-03-30T09:37:41 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': wine-labels
'1': AlcoholPercentage
'2': Appellation AOC DOC AVARegion
'3': Appellation QualityLevel
'4': CountryCountry
'5': Distinct Logo
'6': Established YearYear
'7': Maker-Name
'8': Organic
'9': Sustainable
'10': Sweetness-Brut-SecSweetness-Brut-Sec
'11': TypeWine Type
'12': VintageYear
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: wine-labels
tags:
- rf100
---
# Dataset Card for wine-labels
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/wine-labels
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
wine-labels
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: a `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category (a per-class count sketch follows this list).
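As a hedged sketch (assuming the dataset loads directly via `datasets`), the label distribution across the classes above can be tallied like this:
```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("Francesco/wine-labels", split="train")
names = dataset.features["objects"].feature["category"].names

# Count how often each class appears across all annotations.
counts = Counter()
for example in dataset:
    counts.update(names[c] for c in example["objects"]["category"])

for name, count in counts.most_common():
    print(f"{name}: {count}")
```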
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/wine-labels
### Citation Information
```
@misc{ wine-labels,
title = { wine labels Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/wine-labels } },
url = { https://universe.roboflow.com/object-detection/wine-labels },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | [
-0.6428291201591492,
-0.545708954334259,
0.10108543187379837,
-0.09169566631317139,
-0.4356290400028229,
-0.20111000537872314,
-0.16147351264953613,
-0.6112490892410278,
0.25359511375427246,
0.501728892326355,
-0.7759566307067871,
-1.0744351148605347,
-0.4801926612854004,
0.278061062097549... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
marekk/testing_dataset_article_category | marekk | 2023-04-04T06:29:35Z | 20 | 0 | null | [
"task_categories:text-classification",
"size_categories:n<1K",
"region:us"
] | 2023-04-04T06:29:35Z | 2023-04-03T20:39:25.000Z | 2023-04-03T20:39:25 | ---
task_categories:
- text-classification
pretty_name: Testing dataset Article Category
size_categories:
- n<1K
--- | [
-0.1285335123538971,
-0.1861683875322342,
0.6529128551483154,
0.49436232447624207,
-0.19319400191307068,
0.23607441782951355,
0.36072009801864624,
0.05056373029947281,
0.5793656706809998,
0.7400146722793579,
-0.650810182094574,
-0.23784008622169495,
-0.7102247476577759,
-0.0478255338966846... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
AlekseyKorshuk/roleplay-io | AlekseyKorshuk | 2023-04-05T21:44:58Z | 20 | 9 | null | [
"region:us"
] | 2023-04-05T21:44:58Z | 2023-04-05T21:44:55.000Z | 2023-04-05T21:44:55 | ---
dataset_info:
features:
- name: input_text
dtype: string
- name: output_text
dtype: string
splits:
- name: train
num_bytes: 2495441
num_examples: 3146
download_size: 1543319
dataset_size: 2495441
---
# Dataset Card for "roleplay-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.42651820182800293,
-0.25154414772987366,
0.16816078126430511,
0.3402712345123291,
-0.03209429234266281,
-0.12833496928215027,
0.4179019629955292,
-0.3028034269809723,
0.9051595330238342,
0.6062858700752258,
-1.0389540195465088,
-0.7404325008392334,
-0.4941537082195282,
-0.41257134079933... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
arbml/tashkeela | arbml | 2023-04-06T19:09:05Z | 20 | 2 | null | [
"region:us"
] | 2023-04-06T19:09:05Z | 2023-04-06T19:07:05.000Z | 2023-04-06T19:07:05 | ---
dataset_info:
features:
- name: diacratized
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1419585102
num_examples: 979982
- name: test
num_bytes: 78869542
num_examples: 54444
- name: dev
num_bytes: 78863352
num_examples: 54443
download_size: 747280703
dataset_size: 1577317996
---
# Dataset Card for "tashkeela"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3466772437095642,
-0.4234638214111328,
0.141256183385849,
0.12103936821222305,
-0.30627062916755676,
0.1243261769413948,
0.2969411611557007,
-0.24740512669086456,
0.932397723197937,
0.44576239585876465,
-0.7532410621643066,
-0.8908345103263855,
-0.6698815822601318,
-0.14629511535167694,... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Heerak/ko_en_parallel_dataset | Heerak | 2023-04-20T08:51:52Z | 20 | 0 | null | [
"region:us"
] | 2023-04-20T08:51:52Z | 2023-04-20T08:27:44.000Z | 2023-04-20T08:27:44 | ---
dataset_info:
features:
- name: ko
dtype: string
- name: en
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4112684317
num_examples: 11800415
- name: validation
num_bytes: 20767480
num_examples: 59299
- name: test
num_bytes: 419935
num_examples: 1982
download_size: 2691575595
dataset_size: 4133871732
---
# Dataset Card for "ko_en_parallel_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.7377461194992065,
-0.22928591072559357,
0.42225906252861023,
0.48305749893188477,
-0.29337888956069946,
0.07759369164705276,
0.1734113097190857,
-0.08607015013694763,
0.8793255090713501,
0.5145129561424255,
-0.7879447340965271,
-0.8577008843421936,
-0.6772792935371399,
-0.08110810071229... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
llm-book/ner-wikinews-dataset | llm-book | 2023-09-30T09:55:56Z | 20 | 0 | null | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:ja",
"license:cc-by-2.5",
"news",
"region:us"
] | 2023-09-30T09:55:56Z | 2023-04-22T14:32:21.000Z | 2023-04-22T14:32:21 | ---
license:
- cc-by-2.5
task_categories:
- token-classification
language:
- ja
tags:
- news
pretty_name: ner-wikinews-dataset
size_categories:
- n<1K
---
# Dataset Card for llm-book/ner-wikinews-dataset
This is a dataset of [Wikinews](https://ja.wikinews.org/wiki/%E3%83%A1%E3%82%A4%E3%83%B3%E3%83%9A%E3%83%BC%E3%82%B8) articles annotated with named entity labels, used in the book 『大規模言語モデル入門』 (Introduction to Large Language Models).
The named entity labels follow the same scheme as [llm-book/ner-wikipedia-dataset](https://huggingface.co/datasets/llm-book/ner-wikipedia-dataset), with eight types in total (person names, corporation names, place names, product names, political organization names, facility names, other organization names, and event names).
The dataset consists of a test set only.
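A minimal loading sketch (assuming the single split is named `test`):
```python
from datasets import load_dataset

# Only a test split is provided, per the card above.
dataset = load_dataset("llm-book/ner-wikinews-dataset", split="test")
print(dataset[0])
```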
## Licence
Because this dataset uses articles from the Japanese edition of Wikinews, it follows that license: Creative Commons Attribution 2.5 (CC BY 2.5).
| [
-0.5230106711387634,
-0.6761738061904907,
-0.051232513040304184,
-0.04151662066578865,
-0.5894931554794312,
-0.3762660026550293,
-0.013489206321537495,
-0.1930871158838272,
0.5199601650238037,
0.5931056141853333,
-0.7165990471839905,
-1.0195578336715698,
-0.4756922721862793,
0.195950612425... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
blastwind/github-code-haskell-function | blastwind | 2023-11-11T20:41:47Z | 20 | 0 | null | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"code",
"haskell",
"region:us"
] | 2023-11-11T20:41:47Z | 2023-05-14T05:17:31.000Z | 2023-05-14T05:17:31 | ---
dataset_info:
features:
- name: repo_name
dtype: string
- name: path
dtype: string
- name: license
dtype: string
- name: full_code
dtype: string
- name: full_size
dtype: int64
- name: uncommented_code
dtype: string
- name: uncommented_size
dtype: int64
- name: function_only_code
dtype: string
- name: function_only_size
dtype: int64
- name: is_commented
dtype: bool
- name: is_signatured
dtype: bool
- name: n_ast_errors
dtype: int64
- name: ast_max_depth
dtype: int64
- name: n_whitespaces
dtype: int64
- name: n_ast_nodes
dtype: int64
- name: n_ast_terminals
dtype: int64
- name: n_ast_nonterminals
dtype: int64
- name: loc
dtype: int64
- name: cycloplexity
dtype: int64
splits:
- name: train
num_bytes: 2166157579
num_examples: 2284385
- name: valid
num_bytes: 307778276
num_examples: 326341
- name: test
num_bytes: 620756348
num_examples: 652682
download_size: 1597070903
dataset_size: 3094692203
task_categories:
- text-generation
tags:
- code
- haskell
size_categories:
- 1M<n<10M
---
# Dataset Card for "github-code-haskell-function"
Rows: 3.26M
Download Size: 1.17GB
This dataset is extracted from [github-code-haskell-file](https://huggingface.co/datasets/blastwind/github-code-haskell-file).
Each row has 3 flavors of the same function:
`uncommented_code`: Includes the function and its closest signature.
`function_only_code`: Includes the function only.
`full_code`: Includes the function and its closest [signature](https://wiki.haskell.org/Type_signature) and comment.
The heuristic for finding the closest signature and comment is as follows: if the immediate previous neighbor of the function
is neither a signature nor a comment, `full_code` is just the function. If the previous neighbor is one of the two, include
it, then apply the same logic to its own previous neighbor to look for the other node. A sketch of this heuristic is shown below.
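A hedged Python sketch of that heuristic — the `(kind, text)` node representation is hypothetical; the real extraction operates on the Haskell AST:
```python
def closest_context(nodes: list[tuple[str, str]], func_index: int) -> list[str]:
    """Walk backwards from the function, collecting at most one signature
    and one comment, stopping at anything else."""
    collected, wanted = [], {"signature", "comment"}
    i = func_index - 1
    while i >= 0 and nodes[i][0] in wanted:
        wanted.remove(nodes[i][0])  # each kind is included at most once
        collected.append(nodes[i][1])
        i -= 1
    # Neighbors were gathered nearest-first; restore source order.
    return list(reversed(collected))


def full_code(nodes: list[tuple[str, str]], func_index: int) -> str:
    # full_code = adjacent signature/comment (if any) + the function itself.
    return "\n".join(closest_context(nodes, func_index) + [nodes[func_index][1]])
```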
Further, each row also contains attribute values for my personal analysis project; the attributes are calculated from the code in the `uncommented_code` column.
7% (225k) of the rows have cyclomatic complexity and LOC valued at `-1` because [`homplexity`](https://github.com/BlastWind/homplexity) failed to parse the row's `uncommented_code`.
| [
-0.34226059913635254,
-0.36269596219062805,
0.5381359457969666,
0.04207577183842659,
-0.36455467343330383,
0.280940443277359,
-0.22591044008731842,
-0.18567460775375366,
0.4307824373245239,
0.6717274785041809,
-0.41231152415275574,
-0.731103241443634,
-0.49753108620643616,
0.17498685419559... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
Nan-Do/instructional_code-search-net-javacript | Nan-Do | 2023-05-20T05:26:15Z | 20 | 0 | null | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:apache-2.0",
"JavaScript",
"Code Generation",
"Instruction Response",
"region:us"
] | 2023-05-20T05:26:15Z | 2023-05-19T03:34:38.000Z | 2023-05-19T03:34:38 | ---
dataset_info:
features:
- name: INSTRUCTION
dtype: string
- name: RESPONSE
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 126970947
num_examples: 121323
download_size: 49942966
dataset_size: 126970947
license: apache-2.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- en
tags:
- JavaScript
- Code Generation
- Instruction Response
pretty_name: Instructional JavaScript Dataset
---
# Dataset Card for "instructional_code-search-net-javacript"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-javascript
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This is an instructional dataset for JavaScript.
The dataset contains two different kinds of tasks:
- Given a piece of code, generate a description of what it does.
- Given a description, generate a piece of code that fulfils it.
### Languages
The dataset is in English.
### Data Splits
There are no splits.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset was created to improve the coding capabilities of LLMs.
### Source Data
The summarized version of the code-search-net dataset can be found at https://huggingface.co/datasets/Nan-Do/code-search-net-javascript
### Annotations
The dataset includes an instruction and response columns.
#### Annotation process
The annotation procedure was done using templates and NLP techniques to generate human-like instructions and responses.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to make sure there are no repetitions or meaningless summaries.
### Licensing Information
Apache 2.0 | [
-0.20227259397506714,
-0.5536946058273315,
-0.0026572609785944223,
0.31007641553878784,
0.04680565372109413,
0.06303218752145767,
-0.3061467409133911,
-0.06610192358493805,
0.47255098819732666,
0.5002049803733826,
-0.6677131056785583,
-1.0251684188842773,
-0.41162869334220886,
0.0993064492... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
ml6team/the-stack-smol-python | ml6team | 2023-05-24T12:42:37Z | 20 | 1 | null | [
"region:us"
] | 2023-05-24T12:42:37Z | 2023-05-24T12:42:06.000Z | 2023-05-24T12:42:06 | ---
dataset_info:
features:
- name: content
dtype: string
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: licenses
sequence: string
- name: repository_name
dtype: string
- name: path
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
splits:
- name: train
num_bytes: 82161631
num_examples: 10000
download_size: 28757440
dataset_size: 82161631
---
# Dataset Card for "the-stack-smol-python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.48589420318603516,
-0.16201430559158325,
-0.030965358018875122,
0.29968130588531494,
-0.022575223818421364,
0.019172731786966324,
0.31807050108909607,
-0.009420600719749928,
0.7562189102172852,
0.595660388469696,
-0.8410854339599609,
-0.4708890914916992,
-0.579488217830658,
-0.196652337... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
openaccess-ai-collective/oasst1-guanaco-extended | openaccess-ai-collective | 2023-05-26T12:19:26Z | 20 | 3 | null | [
"region:us"
] | 2023-05-26T12:19:26Z | 2023-05-26T05:40:11.000Z | 2023-05-26T05:40:11 | This is the Guanaco Extended dataset derived from [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1).
Guanaco uses only the first (highest rank; rank 0) assistant response at each reply level as its dataset; a minimal sketch of that rank-0 selection is shown below.
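A hedged sketch of that rank-0 selection (the `role` and `rank` field names follow the oasst1 message schema; treat them as assumptions):
```python
from datasets import load_dataset

messages = load_dataset("OpenAssistant/oasst1", split="train")

# Keep only the top-ranked assistant reply at each level of the tree.
top_replies = [
    m for m in messages
    if m["role"] == "assistant" and m.get("rank") == 0
]
print(len(top_replies))
```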
| [
-0.13933394849300385,
-0.6068545579910278,
0.3144969344139099,
0.10825259983539581,
0.24026499688625336,
-0.0637202039361,
0.2959004342556,
-0.3474414050579071,
0.6244580149650574,
0.5768564343452454,
-1.0331860780715942,
-0.5930507183074951,
-0.5000565052032471,
-0.10199085623025894,
-0... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
linhtran92/viet_cv13 | linhtran92 | 2023-05-29T03:12:27Z | 20 | 0 | null | [
"region:us"
] | 2023-05-29T03:12:27Z | 2023-05-29T03:11:37.000Z | 2023-05-29T03:11:37 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 55369201.125
num_examples: 1671
- name: validation
num_bytes: 5898500.0
num_examples: 256
- name: test
num_bytes: 25318281.0
num_examples: 870
download_size: 85167538
dataset_size: 86585982.125
---
# Dataset Card for "viet_cv13"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.4642113745212555,
-0.2152828723192215,
0.317156583070755,
0.34582048654556274,
-0.3423076570034027,
-0.11820673197507858,
0.3165627121925354,
-0.0570392832159996,
0.6271202564239502,
0.7143065929412842,
-0.8065921068191528,
-1.0388810634613037,
-0.6070693135261536,
-0.1680677980184555,
... | null | null | null | null | null | null | null | null | null | null | null | null | null | |
linhtran92/viet_vlsp | linhtran92 | 2023-05-29T04:27:52Z | 20 | 0 | null | [
"region:us"
] | 2023-05-29T04:27:52Z | 2023-05-29T03:54:01.000Z | 2023-05-29T03:54:01 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 24081636306.031
num_examples: 171441
- name: validation
num_bytes: 1046661092.259
num_examples: 7501
download_size: 25080683463
dataset_size: 25128297398.289997
---
# Dataset Card for "viet_vlsp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | [
-0.3975939154624939,
-0.08775841444730759,
0.3463245928287506,
0.24180270731449127,
-0.3683014512062073,
-0.09029576182365417,
0.287852019071579,
-0.21513748168945312,
0.7883987426757812,
0.8229510188102722,
-0.7340919375419617,
-0.9708175659179688,
-0.6323045492172241,
-0.2664449214935303... | null | null | null | null | null | null | null | null | null | null | null | null | null |