author stringlengths 2 29 ⌀ | cardData null | citation stringlengths 0 9.58k ⌀ | description stringlengths 0 5.93k ⌀ | disabled bool 1 class | downloads float64 1 1M ⌀ | gated bool 2 classes | id stringlengths 2 108 | lastModified stringlengths 24 24 | paperswithcode_id stringlengths 2 45 ⌀ | private bool 2 classes | sha stringlengths 40 40 | siblings list | tags list | readme_url stringlengths 57 163 | readme stringlengths 0 977k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
lovelearn | null | null | null | false | null | false | lovelearn/Ye | 2022-11-08T12:46:56.000Z | null | false | 19c52b91b46cdf9782cb6126ecd2d060cd1da4f0 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/lovelearn/Ye/resolve/main/README.md | ---
license: openrail
---
|
MarkGG | null | null | null | false | 24 | false | MarkGG/Romance-cleaned-2 | 2022-10-28T07:20:20.000Z | null | false | a4d47050c1f1a90dc09c8920cd66ebc1e1523ca0 | [] | [] | https://huggingface.co/datasets/MarkGG/Romance-cleaned-2/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3407789.8839248433
num_examples: 6466
- name: validation
num_bytes: 378936.11607515655
num_examples: 719
download_size: 2403265
dataset_size: 3786726.0
---
# Dataset Card for "Romance-cleaned-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tglcourse | null | null | null | false | 1 | false | tglcourse/latent_lsun_church_256px | 2022-10-28T07:57:35.000Z | null | false | b600bc01160467f3102f821deadf0e130637f94e | [] | [] | https://huggingface.co/datasets/tglcourse/latent_lsun_church_256px/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: '5'
6: '6'
7: '7'
8: '8'
9: '9'
10: a
11: b
12: c
13: d
14: e
15: f
- name: latent
sequence:
sequence:
sequence: float32
splits:
- name: test
num_bytes: 106824288
num_examples: 6312
- name: train
num_bytes: 2029441460
num_examples: 119915
download_size: 2082210019
dataset_size: 2136265748
---
# Dataset Card for "latent_lsun_church_256px"
This is derived from https://huggingface.co/datasets/tglcourse/lsun_church_train.
Each image is cropped to a 256px square and encoded to a 4x32x32 latent representation using the same VAE as the one employed by Stable Diffusion.
Decoding:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
```
|
davanstrien | null | null | null | false | 9 | false | davanstrien/maps_test | 2022-10-28T09:12:40.000Z | null | true | 39104fa2e90047f36353dec688ed68e6bff1af24 | [] | [] | https://huggingface.co/datasets/davanstrien/maps_test/resolve/main/README.md | |
wesleywt | null | null | null | false | 32 | false | wesleywt/uniprot_sprot | 2022-10-30T12:44:58.000Z | null | false | 30044e415f19965e2435434396f050322bca523f | [] | [] | https://huggingface.co/datasets/wesleywt/uniprot_sprot/resolve/main/README.md | ---
dataset_info:
features:
- name: uniprot_id
dtype: string
- name: sequences
dtype: string
splits:
- name: test
num_bytes: 21314102.893347207
num_examples: 56801
- name: train
num_bytes: 191823924.1066528
num_examples: 511201
download_size: 211969427
dataset_size: 213138027.0
---
# Dataset Card for "uniprot_sprot"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
siberspace | null | null | null | false | null | false | siberspace/femmeaigle | 2022-10-28T09:17:35.000Z | null | false | 1a1563b4008cc29d8011a10272e286eac923d63c | [] | [] | https://huggingface.co/datasets/siberspace/femmeaigle/resolve/main/README.md | |
tglcourse | null | null | null | false | null | false | tglcourse/latent_afhqv2_256px | 2022-10-28T11:51:36.000Z | null | false | e40d5764be1040bac56f49cea5df9d243e8d904b | [] | [] | https://huggingface.co/datasets/tglcourse/latent_afhqv2_256px/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
0: cat
1: dog
2: wild
- name: latent
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 267449972
num_examples: 15803
download_size: 260672854
dataset_size: 267449972
---
# Dataset Card for "latent_afhqv2_256px"
Each image is cropped to a 256px square and encoded to a 4x32x32 latent representation using the same VAE as the one employed by Stable Diffusion.
Decoding:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` |
havens2 | null | \ | NAACL2022 is a dataset labelled for the science entity recognition task, a subtask of NER.
The text comes from 2022 conference papers collected from the ACL Anthology.
The dataset was collected by Haotian Teng and Xiaoyue Cui.
The annotation standard can be found at https://github.com/neubig/nlp-from-scratch-assignment-2022/blob/main/annotation_standard.md | false | 61 | false | havens2/naacl2022 | 2022-10-28T11:37:16.000Z | null | false | 39c63d396a8b291a2387b8499c84e7a3c4f3f451 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:crowdsourced",
"license:afl-3.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:acl",
"tags:sciBERT",
"tags:sci",
"tags:11711",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/havens2/naacl2022/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- crowdsourced
license:
- afl-3.0
multilinguality:
- monolingual
pretty_name: sci_NER_naacl
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- acl
- sciBERT
- sci
- '11711'
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Dataset Card for [naacl2022]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a named entity recognition dataset annotated for the science entity recognition task, a [project](https://github.com/neubig/nlp-from-scratch-assignment-2022) from the CMU 11-711 course.
### Supported Tasks and Leaderboards
NER task.
### Languages
English
## Dataset Structure
### Data Instances
A sample from the dataset:
{'id': '0',
'tokens': ['We', 'sample', '50', 'negative', 'cases', 'from', 'T5LARGE', '+', 'GenMC', 'for', 'each', 'dataset'],
'ner_tags':['O', 'O', 'O', 'O', 'O', 'O', 'B-MethodName', 'O', 'B-MethodName', 'O', 'O', 'O']}
### Data Fields
id,tokens,ner_tags
- `id`: a `string` feature giving the sample index.
- `tokens`: a `list` of `string` features giving the token sequence.
- `ner_tags`: a `list` of classification labels for each token in the sentence, with possible values including
`O` (0), `B-MethodName` (1), `I-MethodName` (2), `B-HyperparameterName` (3),`I-HyperparameterName` (4),`B-HyperparameterValue` (5),`I-HyperparameterValue` (6),`B-MetricName` (7),`I-MetricName` (8),`B-MetricValue` (9),`I-MetricValue` (10),`B-TaskName` (11),`I-TaskName` (12),`B-DatasetName` (13),`I-DatasetName` (14).
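As a hedged illustration (the helper below is hypothetical and not part of the dataset; it simply mirrors the 15-label scheme listed above), the integer tags can be converted back to label strings with a lookup table:

```python
# Sketch: mapping between integer ner_tags and label names, assuming the
# 15-label scheme listed above (hypothetical helper, not part of the dataset).
LABELS = [
    "O",
    "B-MethodName", "I-MethodName",
    "B-HyperparameterName", "I-HyperparameterName",
    "B-HyperparameterValue", "I-HyperparameterValue",
    "B-MetricName", "I-MetricName",
    "B-MetricValue", "I-MetricValue",
    "B-TaskName", "I-TaskName",
    "B-DatasetName", "I-DatasetName",
]
id2label = dict(enumerate(LABELS))
label2id = {name: i for i, name in id2label.items()}

def decode_tags(tag_ids):
    """Convert a list of integer ner_tags into label strings."""
    return [id2label[t] for t in tag_ids]

# The sample instance above has B-MethodName at positions 6 and 8:
print(decode_tags([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0]))
```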
### Data Splits
The data is split into:
- train.txt
- dev.txt
- test.txt
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The data was annotated using Label Studio; the papers were collected from the TACL and ACL 2022 conferences.
#### Who are the annotators?
Xiaoyue Cui and Haotian Teng annotated the datasets.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@xcui297](https://github.com/xcui297); [@haotianteng](https://github.com/haotianteng) for adding this dataset.
|
puellacurae | null | null | null | false | null | false | puellacurae/x | 2022-10-28T09:44:17.000Z | null | false | b1496cd7a3be1e9b1d7f1301c0df7353c17cc48b | [] | [
"doi:10.57967/hf/0067",
"license:openrail"
] | https://huggingface.co/datasets/puellacurae/x/resolve/main/README.md | ---
license: openrail
---
|
tglcourse | null | null | null | false | null | false | tglcourse/latent_afhqv2_512px | 2022-10-28T11:52:19.000Z | null | false | eac45f711beabc481045075e3066be32ed55dc8e | [] | [] | https://huggingface.co/datasets/tglcourse/latent_afhqv2_512px/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
0: cat
1: dog
2: wild
- name: latent
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1052290164
num_examples: 15803
download_size: 1038619876
dataset_size: 1052290164
---
# Dataset Card for "latent_afhqv2_512px"
Each image is cropped to a 512px square and encoded to a 4x64x64 latent representation using the same VAE as the one employed by Stable Diffusion.
Decoding:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_512px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 64, 64)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` |
PlanTL-GOB-ES | null | @misc{11234/1-3424,
title = {Universal Dependencies 2.7},
author = {Zeman, Daniel and Nivre, Joakim and Abrams, Mitchell and Ackermann, Elia and Aepli, No{\"e}mi and Aghaei, Hamid and Agi{\'c}, {\v Z}eljko and Ahmadi, Amir and Ahrenberg, Lars and Ajede, Chika Kennedy and Aleksandravi{\v c}i{\=u}t{\.e}, Gabriel{\.e} and Alfina, Ika and Antonsen, Lene and Aplonova, Katya and Aquino, Angelina and Aragon, Carolina and Aranzabe, Maria Jesus and Arnard{\'o}ttir, {\t H}{\'o}runn and Arutie, Gashaw and Arwidarasti, Jessica Naraiswari and Asahara, Masayuki and Ateyah, Luma and Atmaca, Furkan and Attia, Mohammed and Atutxa, Aitziber and Augustinus, Liesbeth and Badmaeva, Elena and Balasubramani, Keerthana and Ballesteros, Miguel and Banerjee, Esha and Bank, Sebastian and Barbu Mititelu, Verginica and Basmov, Victoria and Batchelor, Colin and Bauer, John and Bedir, Seyyit Talha and Bengoetxea, Kepa and Berk, G{\"o}zde and Berzak, Yevgeni and Bhat, Irshad Ahmad and Bhat, Riyaz Ahmad and Biagetti, Erica and Bick, Eckhard and Bielinskien{\.e}, Agn{\.e} and Bjarnad{\'o}ttir, Krist{\'{\i}}n and Blokland, Rogier and Bobicev, Victoria and Boizou, Lo{\"{\i}}c and Borges V{\"o}lker, Emanuel and B{\"o}rstell, Carl and Bosco, Cristina and Bouma, Gosse and Bowman, Sam and Boyd, Adriane and Brokait{\.e}, Kristina and Burchardt, Aljoscha and Candito, Marie and Caron, Bernard and Caron, Gauthier and Cavalcanti, Tatiana and Cebiroglu Eryigit, Gulsen and Cecchini, Flavio Massimiliano and Celano, Giuseppe G. A. and Ceplo, Slavomir and Cetin, Savas and Cetinoglu, Ozlem and Chalub, Fabricio and Chi, Ethan and Cho, Yongseok and Choi, Jinho and Chun, Jayeol and Cignarella, Alessandra T. 
and Cinkova, Silvie and Collomb, Aurelie and Coltekin, Cagr{\i} and Connor, Miriam and Courtin, Marine and Davidson, Elizabeth and de Marneffe, Marie-Catherine and de Paiva, Valeria and Derin, Mehmet Oguz and de Souza, Elvis and Diaz de Ilarraza, Arantza and Dickerson, Carly and Dinakaramani, Arawinda and Dione, Bamba and Dirix, Peter and Dobrovoljc, Kaja and Dozat, Timothy and Droganova, Kira and Dwivedi, Puneet and Eckhoff, Hanne and Eli, Marhaba and Elkahky, Ali and Ephrem, Binyam and Erina, Olga and Erjavec, Tomaz and Etienne, Aline and Evelyn, Wograine and Facundes, Sidney and Farkas, Rich{\'a}rd and Fernanda, Mar{\'{\i}}lia and Fernandez Alcalde, Hector and Foster, Jennifer and Freitas, Cl{\'a}udia and Fujita, Kazunori and Gajdosov{\'a}, Katar{\'{\i}}na and Galbraith, Daniel and Garcia, Marcos and G{\"a}rdenfors, Moa and Garza, Sebastian and Gerardi, Fabr{\'{\i}}cio Ferraz and Gerdes, Kim and Ginter, Filip and Goenaga, Iakes and Gojenola, Koldo and G{\"o}k{\i}rmak, Memduh and Goldberg, Yoav and G{\'o}mez Guinovart, Xavier and Gonz{\'a}lez Saavedra,
Berta and Grici{\=u}t{\.e}, Bernadeta and Grioni, Matias and Grobol, Lo{\"{\i}}c and Gr{\=u}z{\={\i}}tis, Normunds and Guillaume, Bruno and Guillot-Barbance, C{\'e}line and G{\"u}ng{\"o}r, Tunga and Habash, Nizar and Hafsteinsson, Hinrik and Haji{\v c}, Jan and Haji{\v c} jr., Jan and H{\"a}m{\"a}l{\"a}inen, Mika and H{\`a} M{\~y}, Linh and Han, Na-Rae and Hanifmuti, Muhammad Yudistira and Hardwick, Sam and Harris, Kim and Haug, Dag and Heinecke, Johannes and Hellwig, Oliver and Hennig, Felix and Hladk{\'a}, Barbora and Hlav{\'a}{\v c}ov{\'a}, Jaroslava and Hociung, Florinel and Hohle, Petter and Huber, Eva and Hwang, Jena and Ikeda, Takumi and Ingason, Anton Karl and Ion, Radu and Irimia, Elena and Ishola, {\d O}l{\'a}j{\'{\i}}d{\'e} and Jel{\'{\i}}nek, Tom{\'a}{\v s} and Johannsen, Anders and J{\'o}nsd{\'o}ttir, Hildur and J{\o}rgensen, Fredrik and Juutinen, Markus and K, Sarveswaran and Ka{\c s}{\i}kara, H{\"u}ner and Kaasen, Andre and Kabaeva, Nadezhda and Kahane, Sylvain and Kanayama, Hiroshi and Kanerva, Jenna and Katz, Boris and Kayadelen, Tolga and Kenney, Jessica and Kettnerov{\'a}, V{\'a}clava and Kirchner, Jesse and Klementieva, Elena and K{\"o}hn, Arne and K{\"o}ksal, Abdullatif and Kopacewicz, Kamil and Korkiakangas, Timo and Kotsyba, Natalia and Kovalevskait{\.e}, Jolanta and Krek, Simon and Krishnamurthy, Parameswari and Kwak, Sookyoung and Laippala, Veronika and Lam, Lucia and Lambertino, Lorenzo and Lando, Tatiana and Larasati, Septina Dian and Lavrentiev, Alexei and Lee, John and L{\^e} H{\`{\^o}}ng, Phương and Lenci, Alessandro and Lertpradit, Saran and Leung, Herman and Levina, Maria and Li, Cheuk Ying and Li, Josie and Li, Keying and Li, Yuan and Lim, {KyungTae} and Linden, Krister and Ljubesic, Nikola and Loginova, Olga and Luthfi, Andry and Luukko, Mikko and Lyashevskaya, Olga and Lynn, Teresa and Macketanz, Vivien and Makazhanov, Aibek and Mandl, Michael and Manning, Christopher and Manurung, Ruli and Maranduc, Catalina and Marcek, David and 
Marheinecke, Katrin and Mart{\'{\i}}nez Alonso, H{\'e}ctor and Martins, Andr{\'e} and Masek, Jan and Matsuda, Hiroshi and Matsumoto, Yuji and {McDonald}, Ryan and {McGuinness}, Sarah and Mendonca, Gustavo and Miekka, Niko and Mischenkova, Karina and Misirpashayeva, Margarita and Missil{\"a}, Anna and Mititelu, Catalin and Mitrofan, Maria and Miyao, Yusuke and Mojiri Foroushani, {AmirHossein} and Moloodi, Amirsaeid and Montemagni, Simonetta and More, Amir and Moreno Romero, Laura and Mori, Keiko Sophie and Mori, Shinsuke and Morioka, Tomohiko and Moro, Shigeki and Mortensen, Bjartur and Moskalevskyi, Bohdan and Muischnek, Kadri and Munro, Robert and Murawaki, Yugo and M{\"u}{\"u}risep, Kaili and Nainwani, Pinkey and Nakhl{\'e}, Mariam and Navarro Hor{\~n}iacek, Juan Ignacio and Nedoluzhko,
Anna and Ne{\v s}pore-B{\=e}rzkalne, Gunta and Nguy{\~{\^e}}n Th{\d i}, Lương and Nguy{\~{\^e}}n Th{\d i} Minh, Huy{\`{\^e}}n and Nikaido, Yoshihiro and Nikolaev, Vitaly and Nitisaroj, Rattima and Nourian, Alireza and Nurmi, Hanna and Ojala, Stina and Ojha, Atul Kr. and Ol{\'u}{\`o}kun, Ad{\'e}day{\d o}̀ and Omura, Mai and Onwuegbuzia, Emeka and Osenova, Petya and {\"O}stling, Robert and {\O}vrelid, Lilja and {\"O}zate{\c s}, {\c S}aziye Bet{\"u}l and {\"O}zg{\"u}r, Arzucan and {\"O}zt{\"u}rk Ba{\c s}aran, Balk{\i}z and Partanen, Niko and Pascual, Elena and Passarotti, Marco and Patejuk, Agnieszka and Paulino-Passos, Guilherme and Peljak-{\L}api{\'n}ska, Angelika and Peng, Siyao and Perez, Cenel-Augusto and Perkova, Natalia and Perrier, Guy and Petrov, Slav and Petrova, Daria and Phelan, Jason and Piitulainen, Jussi and Pirinen, Tommi A and Pitler, Emily and Plank, Barbara and Poibeau, Thierry and Ponomareva, Larisa and Popel, Martin and Pretkalnina, Lauma and Pr{\'e}vost, Sophie and Prokopidis, Prokopis and Przepi{\'o}rkowski, Adam and Puolakainen, Tiina and Pyysalo, Sampo and Qi, Peng and R{\"a}{\"a}bis, Andriela and Rademaker, Alexandre and Rama, Taraka and Ramasamy, Loganathan and Ramisch, Carlos and Rashel, Fam and Rasooli, Mohammad Sadegh and Ravishankar, Vinit and Real, Livy and Rebeja, Petru and Reddy, Siva and Rehm, Georg and Riabov, Ivan and Rie{\ss}ler, Michael and Rimkut{\.e}, Erika and Rinaldi, Larissa and Rituma, Laura and Rocha, Luisa and R{\"o}gnvaldsson, Eir{\'{\i}}kur and Romanenko, Mykhailo and Rosa, Rudolf and Roșca, Valentin and Rovati, Davide and Rudina, Olga and Rueter, Jack and R{\'u}narsson, Kristjan and Sadde, Shoval and Safari, Pegah and Sagot, Benoit and Sahala, Aleksi and Saleh, Shadi and Salomoni, Alessio and Samardzi{\'c}, Tanja and Samson, Stephanie and Sanguinetti, Manuela and S{\"a}rg,
Dage and Saul{\={\i}}te, Baiba and Sawanakunanon, Yanin and Scannell, Kevin and Scarlata, Salvatore and Schneider, Nathan and Schuster, Sebastian and Seddah, Djam{\'e} and Seeker, Wolfgang and Seraji, Mojgan and Shen, Mo and Shimada, Atsuko and Shirasu, Hiroyuki and Shohibussirri, Muh and Sichinava, Dmitry and Sigurðsson, Einar Freyr and Silveira, Aline and Silveira, Natalia and Simi, Maria and Simionescu, Radu and Simk{\'o}, Katalin and {\v S}imkov{\'a}, M{\'a}ria and Simov, Kiril and Skachedubova, Maria and Smith, Aaron and Soares-Bastos, Isabela and Spadine, Carolyn and Steingr{\'{\i}}msson, Stein{\t h}{\'o}r and Stella, Antonio and Straka, Milan and Strickland, Emmett and Strnadov{\'a}, Jana and Suhr, Alane and Sulestio, Yogi Lesmana and Sulubacak, Umut and Suzuki, Shingo and Sz{\'a}nt{\'o}, Zsolt and Taji, Dima and Takahashi, Yuta and Tamburini, Fabio and Tan, Mary Ann C. and Tanaka, Takaaki and Tella, Samson and Tellier, Isabelle and Thomas, Guillaume and Torga, Liisi and Toska, Marsida and Trosterud, Trond and Trukhina, Anna and Tsarfaty, Reut and T{\"u}rk, Utku and Tyers, Francis and Uematsu, Sumire and Untilov, Roman and Uresov{\'a}, Zdenka and Uria, Larraitz and Uszkoreit, Hans and Utka, Andrius and Vajjala, Sowmya and van Niekerk, Daniel and van Noord, Gertjan and Varga, Viktor and Villemonte de la Clergerie, Eric and Vincze, Veronika and Wakasa, Aya and Wallenberg, Joel C. and Wallin, Lars and Walsh, Abigail and Wang, Jing Xian and Washington, Jonathan North and Wendt, Maximilan and Widmer, Paul and Williams, Seyi and Wir{\'e}n, Mats and Wittern, Christian and Woldemariam, Tsegay and Wong, Tak-sum and Wr{\'o}blewska, Alina and Yako, Mary and Yamashita, Kayo and Yamazaki, Naoki and Yan, Chunxiao and Yasuoka, Koichi and Yavrumyan, Marat M. and Yu, Zhuoran and Zabokrtsk{\'y}, Zdenek and Zahra, Shorouq and Zeldes, Amir and Zhu, Hanzhi and Zhuravleva, Anna},
url = {http://hdl.handle.net/11234/1-3424},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Licence Universal Dependencies v2.7},
year = {2020} } | Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal) Stanford dependencies (de Marneffe et al., 2006, 2008, 2014), Google universal part-of-speech tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets (Zeman, 2008). | false | null | false | PlanTL-GOB-ES/UD_Spanish-AnCora | 2022-10-28T10:37:07.000Z | null | false | b76556ca3af80cdaa6f131011f23f088f85c188d | [] | [
"annotations_creators:expert-generated",
"language:es",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:monolingual",
"task_categories:token-classification",
"task_ids:part-of-speech"
] | https://huggingface.co/datasets/PlanTL-GOB-ES/UD_Spanish-AnCora/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: UD_Spanish-AnCora
size_categories: []
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- part-of-speech
---
# UD_Spanish-AnCora
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/UniversalDependencies/UD_Spanish-AnCora
- **Point of Contact:** [Daniel Zeman](zeman@ufal.mff.cuni.cz)
### Dataset Summary
This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
POS tagging
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
Three CoNLL-U files.
Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below).
2) Blank lines marking sentence boundaries.
3) Comment lines starting with hash (#).
### Data Fields
Word lines contain the following fields:
1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0).
2) FORM: Word form or punctuation symbol.
3) LEMMA: Lemma or stem of word form.
4) UPOS: Universal part-of-speech tag.
5) XPOS: Language-specific part-of-speech tag; underscore if not available.
6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available.
7) HEAD: Head of the current word, which is either a value of ID or zero (0).
8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one.
9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs.
10) MISC: Any other annotation.
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html)
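As a hedged sketch of the 10-column layout described above (the parser is a hypothetical helper, not part of the dataset; the example word line is made up for illustration):

```python
# Sketch: parsing one CoNLL-U word line into its 10 tab-separated fields
# (hypothetical helper based on the field list above).
CONLLU_FIELDS = [
    "ID", "FORM", "LEMMA", "UPOS", "XPOS",
    "FEATS", "HEAD", "DEPREL", "DEPS", "MISC",
]

def parse_word_line(line):
    """Return a dict of the 10 CoNLL-U fields, or None for blank/comment lines."""
    line = line.rstrip("\n")
    if not line or line.startswith("#"):
        return None  # blank line (sentence boundary) or comment line
    values = line.split("\t")
    if len(values) != 10:
        raise ValueError(f"expected 10 fields, got {len(values)}")
    return dict(zip(CONLLU_FIELDS, values))

# Example word line (invented for illustration):
row = parse_word_line("1\tEl\tel\tDET\tda0ms0\tDefinite=Def\t2\tdet\t_\t_")
print(row["UPOS"])  # DET
```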
### Data Splits
- es_ancora-ud-train.conllu
- es_ancora-ud-dev.conllu
- es_ancora-ud-test.conllu
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
- [UD_Spanish-AnCora](https://github.com/UniversalDependencies/UD_Spanish-AnCora)
#### Initial Data Collection and Normalization
The original annotation was done in a constituency framework as a part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies.
For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/).
To learn about Universal Dependencies, visit the webpage [https://universaldependencies.org](https://universaldependencies.org)
#### Who are the source language producers?
For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Annotations
#### Annotation process
For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/).
#### Who are the annotators?
For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/).
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
### Licensing Information
This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>.
### Citation Information
The following paper must be cited when using this corpus:
Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco).
To cite the Universal Dependencies project:
Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
|
PlanTL-GOB-ES | null | @inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
} | Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition.
We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
The participants of the shared task will be offered training and test data for at least two languages.
They will use the data for developing a named-entity recognition system that includes a machine learning component.
Information sources other than the training data may be used in this shared task.
We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
The train/validation/test sets are available in Spanish and Dutch.
For more details see https://www.clips.uantwerpen.be/conll2002/ner/ and https://www.aclweb.org/anthology/W02-2024/ | false | 4 | false | PlanTL-GOB-ES/CoNLL-NERC-es | 2022-11-02T10:34:16.000Z | null | false | eb715b89f5d0caf8cf7267475a112d67f34b2597 | [] | [
"annotations_creators:expert-generated",
"language:es",
"language_creators:found",
"multilinguality:monolingual",
"task_categories:token-classification",
"task_ids:part-of-speech"
] | https://huggingface.co/datasets/PlanTL-GOB-ES/CoNLL-NERC-es/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- es
language_creators:
- found
multilinguality:
- monolingual
pretty_name: CoNLL-NERC-es
size_categories: []
source_datasets: []
tags: []
task_categories:
- token-classification
task_ids:
- part-of-speech
---
# CoNLL-NERC-es
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://www.cs.upc.edu/~nlp/tools/nerc/nerc.html
- **Point of Contact:** [Xavier Carreras](carreras@lsi.upc.es)
### Dataset Summary
CoNLL-NERC is the Spanish dataset of the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf). The dataset is annotated with four types of named entities --persons, locations, organizations, and other miscellaneous entities-- formatted in the standard Beginning-Inside-Outside (BIO) format. The corpus consists of 8,324 train sentences with 19,400 named entities, 1,916 development sentences with 4,568 named entities, and 1,518 test sentences with 3,644 named entities.
We use this corpus as part of the EvalEs Spanish language benchmark.
### Supported Tasks and Leaderboards
Named Entity Recognition and Classification
### Languages
The dataset is in Spanish (`es-ES`)
## Dataset Structure
### Data Instances
<pre>
El DA O
Abogado NC B-PER
General AQ I-PER
del SP I-PER
Estado NC I-PER
, Fc O
Daryl VMI B-PER
Williams NC I-PER
, Fc O
subrayó VMI O
hoy RG O
la DA O
necesidad NC O
de SP O
tomar VMN O
medidas NC O
para SP O
proteger VMN O
al SP O
sistema NC O
judicial AQ O
australiano AQ O
frente RG O
a SP O
una DI O
página NC O
de SP O
internet NC O
que PR O
imposibilita VMI O
el DA O
cumplimiento NC O
de SP O
los DA O
principios NC O
básicos AQ O
de SP O
la DA O
Ley NC B-MISC
. Fp O
</pre>
### Data Fields
Every file has three columns, as in the example above: the word form or punctuation symbol, its part-of-speech tag, and the corresponding IOB named-entity tag. Sentences are separated by an empty line.
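For illustration only, here is a minimal sketch (ours, not part of the official distribution; the helper name is hypothetical) of parsing the whitespace-separated column format shown in the example above into sentences:

```python
# Hypothetical helper (not part of the official release): group the
# whitespace-separated lines into sentences, splitting on empty lines.
def read_conll(lines):
    sentences, current = [], []
    for line in lines:
        line = line.strip()
        if not line:                    # an empty line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, pos, ner = line.split()  # word form, PoS tag, IOB NER tag
        current.append((token, pos, ner))
    if current:                         # flush the final sentence
        sentences.append(current)
    return sentences

sample = """El DA O
Abogado NC B-PER
General AQ I-PER

, Fc O""".splitlines()

sentences = read_conll(sample)
# sentences[0] holds the first sentence as (token, pos, tag) triples
```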
### Data Splits
- esp.train: 273037 lines
- esp.testa: 54837 lines (used as dev)
- esp.testb: 53049 lines (used as test)
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
#### Initial Data Collection and Normalization
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the source language producers?
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Annotations
#### Annotation process
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
#### Who are the annotators?
The annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB), and funded by the European Commission through the NAMIC project (IST-1999-12392).
For more information visit the paper from the CoNLL-2002 Shared Task [(Tjong Kim Sang, 2002)](https://aclanthology.org/W02-2024.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
[N/A]
### Citation Information
The following paper must be cited when using this corpus:
Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).
|
tglcourse | null | null | null | false | null | false | tglcourse/latent_lsun_church_128px | 2022-10-28T11:50:20.000Z | null | false | 8ebccbfbb024e9f07a36c44ca2ddea0165d2c261 | [] | [] | https://huggingface.co/datasets/tglcourse/latent_lsun_church_128px/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype:
class_label:
names:
0: '0'
1: '1'
2: '2'
3: '3'
4: '4'
5: '5'
6: '6'
7: '7'
8: '8'
9: '9'
10: a
11: b
12: c
13: d
14: e
15: f
- name: latent
sequence:
sequence:
sequence: float32
splits:
- name: test
num_bytes: 27646560
num_examples: 6312
- name: train
num_bytes: 525227700
num_examples: 119915
download_size: 527167710
dataset_size: 552874260
---
# Dataset Card for "latent_lsun_church_128px"
Each image is cropped to 128px square and encoded to a 4x16x16 latent representation using the same VAE as that employed by Stable Diffusion.
Decoding:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_128px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 16, 16)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` |
Dremy | null | null | null | false | null | false | Dremy/test | 2022-10-28T10:50:29.000Z | null | false | 7d65bbde119c5f1fb64d27bdc8aafbcd65fd37dc | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Dremy/test/resolve/main/README.md | ---
license: openrail
---
|
web2write | null | null | null | false | null | false | web2write/kicowrite | 2022-10-28T11:00:26.000Z | null | false | 7101673114b7f0b3f6dd1d57e9e480ba0cedee5a | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/web2write/kicowrite/resolve/main/README.md | ---
license: cc-by-4.0
---
|
bonadossou | null | null | null | false | null | false | bonadossou/afrolm_active_learning_dataset | 2022-11-08T13:08:41.000Z | null | false | daaa87d517ba233cfa4d3cea303d632d8071be60 | [] | [
"arxiv:2211.03263",
"annotations_creators:crowdsourced",
"language:amh",
"language:orm",
"language:lin",
"language:hau",
"language:ibo",
"language:kin",
"language:lug",
"language:luo",
"language:pcm",
"language:swa",
"language:wol",
"language:yor",
"language:bam",
"language:bbj",
"language:ewe",
"language:fon",
"language:mos",
"language:nya",
"language:sna",
"language:tsn",
"language:twi",
"language:xho",
"language:zul",
"language_creators:crowdsourced",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"tags:afrolm",
"tags:active learning",
"tags:language modeling",
"tags:research papers",
"tags:natural language processing",
"tags:self-active learning",
"task_categories:fill-mask",
"task_ids:masked-language-modeling"
] | https://huggingface.co/datasets/bonadossou/afrolm_active_learning_dataset/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- amh
- orm
- lin
- hau
- ibo
- kin
- lug
- luo
- pcm
- swa
- wol
- yor
- bam
- bbj
- ewe
- fon
- mos
- nya
- sna
- tsn
- twi
- xho
- zul
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: afrolm-dataset
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- afrolm
- active learning
- language modeling
- research papers
- natural language processing
- self-active learning
task_categories:
- fill-mask
task_ids:
- masked-language-modeling
---
# AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages
- [GitHub Repository of the Paper](https://github.com/bonaventuredossou/MLM_AL)
This repository contains the dataset for our paper [`AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages`](https://arxiv.org/pdf/2211.03263.pdf) which will appear at the third Simple and Efficient Natural Language Processing, at EMNLP 2022.
## Our self-active learning framework

## Languages Covered
AfroLM has been pretrained from scratch on 23 African Languages: Amharic, Afan Oromo, Bambara, Ghomalá, Éwé, Fon, Hausa, Ìgbò, Kinyarwanda, Lingala, Luganda, Luo, Mooré, Chewa, Naija, Shona, Swahili, Setswana, Twi, Wolof, Xhosa, Yorùbá, and Zulu.
## Evaluation Results
AfroLM was evaluated on the MasakhaNER1.0 (10 African languages) and MasakhaNER2.0 (21 African languages) datasets, as well as on text classification and sentiment analysis. AfroLM outperformed AfriBERTa, mBERT, and XLMR-base, and was very competitive with AfroXLMR. AfroLM is also very data efficient because it was pretrained on a dataset 14x+ smaller than those of its competitors. Below are the average F1-score performances of various models across several datasets. Please consult our paper for more language-level performance.
Model | MasakhaNER | MasakhaNER2.0* | Text Classification (Yoruba/Hausa) | Sentiment Analysis (YOSM) | OOD Sentiment Analysis (Twitter -> YOSM) |
|:---: |:---: |:---: | :---: |:---: | :---: |
`AfroLM-Large` | **80.13** | **83.26** | **82.90/91.00** | **85.40** | **68.70** |
`AfriBERTa` | 79.10 | 81.31 | 83.22/90.86 | 82.70 | 65.90 |
`mBERT` | 71.55 | 80.68 | --- | --- | --- |
`XLMR-base` | 79.16 | 83.09 | --- | --- | --- |
`AfroXLMR-base` | `81.90` | `84.55` | --- | --- | --- |
- (*) The evaluation was made on the 11 additional languages of the dataset.
- Bold numbers represent the performance of the model with the **smallest pretrained data**.
## Pretrained Models and Dataset
**Models:**: [AfroLM-Large](https://huggingface.co/bonadossou/afrolm_active_learning) and **Dataset**: [AfroLM Dataset](https://huggingface.co/datasets/bonadossou/afrolm_active_learning_dataset)
## HuggingFace usage of AfroLM-large
```python
from transformers import XLMRobertaModel, XLMRobertaTokenizer
model = XLMRobertaModel.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer = XLMRobertaTokenizer.from_pretrained("bonadossou/afrolm_active_learning")
tokenizer.model_max_length = 256
```
The `AutoTokenizer` class does not load our tokenizer successfully, so we recommend using the `XLMRobertaTokenizer` class directly. Depending on your task, load the corresponding variant of the model. Read the [XLMRoberta Documentation](https://huggingface.co/docs/transformers/model_doc/xlm-roberta)
## Reproducing our result: Training and Evaluation
- To train the network, run `python active_learning.py`. You can also wrap it around a `bash` script.
- For the evaluation:
- NER Classification: `bash ner_experiments.sh`
- Text Classification & Sentiment Analysis: `bash text_classification_all.sh`
## Citation
- **Arxiv Citation**:
``@misc{dossou2022afrolm,
title={AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages},
author={Bonaventure F. P. Dossou and Atnafu Lambebo Tonja and Oreen Yousuf and Salomey Osei and Abigail Oppong and Iyanuoluwa Shode and Oluwabusayo Olufunke Awoyomi and Chris Chinenye Emezue},
year={2022},
eprint={2211.03263},
archivePrefix={arXiv},
primaryClass={cs.CL}}``
We will share the official proceedings citation as soon as possible. Stay tuned, and if you liked our work, give it a star.
## Reach out
Do you have a question? Please create an issue and we will reach out as soon as possible. |
PlanTL-GOB-ES | null | null | null | false | 1 | false | PlanTL-GOB-ES/MLDoc | 2022-11-03T09:24:03.000Z | null | false | e986b088ae469d2ba32caba321dbf911902ec8b7 | [] | [
"annotations_creators:expert-generated",
"language:es",
"language_creators:found",
"multilinguality:multilingual",
"license:cc-by-nc-4.0",
"task_categories:text-classification"
] | https://huggingface.co/datasets/PlanTL-GOB-ES/MLDoc/resolve/main/README.md |
---
YAML tags:
annotations_creators:
- expert-generated
language:
- es
language_creators:
- found
multilinguality:
- multilingual
pretty_name: MLDoc
license: cc-by-nc-4.0
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids: []
---
# MLDoc
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Website:** https://github.com/facebookresearch/MLDoc
### Dataset Summary
For document classification, we use the Multilingual Document Classification Corpus (MLDoc) [(Schwenk and Li, 2018)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf), a cross-lingual document classification dataset covering 8 languages. We use the Spanish portion to evaluate our models on monolingual classification as part of the EvalEs Spanish language benchmark. The corpus consists of 14,458 news articles from Reuters classified in four categories: Corporate/Industrial, Economics, Government/Social and Markets.
This dataset can't be downloaded straight from HuggingFace as it requires signing specific agreements. The detailed instructions on how to download it can be found in this [repository](https://github.com/facebookresearch/MLDoc).
### Supported Tasks and Leaderboards
Text Classification
### Languages
The dataset is in English, German, French, Spanish, Italian, Russian, Japanese and Chinese.
## Dataset Structure
### Data Instances
<pre>
MCAT b' FRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abri\xc3\xb3 la sesi\xc3\xb3n de corros con baja por la ca\xc3\xadda del viernes en Wall Street y una toma de beneficios. El d\xc3\xb3lar ayudaba a apuntalar al mercado, que pronto podr\xc3\xada reanudar su tendencia alcista. Volkswagen bajaba por los da\xc3\xb1os ocasionados por la huelga de camioneros en Espa\xc3\xb1a. Preussag participaba en un joint venture de exploraci\xc3\xb3n petrol\xc3\xadfera en Filipinas con Atlantic Richfield Co. A las 0951 GMT, el Dax 30 bajaba 10,49 puntos, un 0,32 pct, a 3.237,69 tras abrir a un m\xc3\xa1ximo de 3.237,69. (c) Reuters Limited 1997. '
</pre>
### Data Fields
- Label: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets)
- Text
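As an illustration only (the actual files require the agreements described above), here is a hedged sketch of parsing a split, assuming each line follows a `<LABEL><TAB><text>` layout; the helper name is ours:

```python
# Hypothetical snippet: each line of a split is assumed to be
# "<LABEL>\t<document text>" with the label being CCAT/ECAT/GCAT/MCAT.
def parse_mldoc(lines):
    pairs = []
    for line in lines:
        label, text = line.rstrip("\n").split("\t", 1)  # split on the first tab only
        pairs.append((label, text))
    return pairs

sample = [
    "MCAT\tFRANCFORT, 17 feb (Reuter) - La Bolsa de Francfort abrio la sesion ...",
    "ECAT\tOtra noticia sobre economia ...",
]
pairs = parse_mldoc(sample)
# pairs is a list of (label, text) tuples ready for a text classifier
```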
### Data Splits
- train.tsv: 9,458 lines
- valid.tsv: 1,000 lines
- test.tsv: 4,000 lines
## Dataset Creation
### Curation Rationale
[N/A]
### Source Data
The source data is from the Reuters Corpus. In 2000, Reuters Ltd made available a large collection of Reuters News stories for use in research and development of natural language processing, information retrieval, and machine learning systems. This corpus, known as "Reuters Corpus, Volume 1" or RCV1, is significantly larger than the older, well-known Reuters-21578 collection heavily used in the text classification community.
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
#### Initial Data Collection and Normalization
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
#### Who are the source language producers?
For more information visit the paper [(Lewis et al., 2004)](https://www.jmlr.org/papers/volume5/lewis04a/lewis04a.pdf).
### Annotations
#### Annotation process
For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).
#### Who are the annotators?
For more information visit the paper [(Schwenk and Li, 2018; Lewis et al., 2004)](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of language models in Spanish.
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
[N/A]
### Licensing Information
Access to the actual news stories of the Reuters Corpus (both RCV1 and RCV2) requires a NIST agreement. The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
- Organizational agreement: This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
- Individual agreement: This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
For more information about the agreement see [here](https://trec.nist.gov/data/reuters/reuters.html)
### Citation Information
The following paper must be cited when using this corpus:
```
@InProceedings{SCHWENK18.658,
author = {Holger Schwenk and Xian Li},
title = {A Corpus for Multilingual Document Classification in Eight Languages},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {may},
date = {7-12},
location = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {979-10-95546-00-9},
language = {english}
}
@inproceedings{schwenk-li-2018-corpus,
title = "A Corpus for Multilingual Document Classification in Eight Languages",
author = "Schwenk, Holger and
Li, Xian",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1560",
}
```
|
KETI-AIR | null | There is no citation information | # Korean Dialogue Summarization
## Introduction
Korean dialogue-summarization text data for developing Korean dialogue summarization AI, based on source texts of various types of Korean dialogue such as everyday conversations and debates.
## Purpose
Construction of Korean training data for developing dialogue summarization technology, which, compared to written registers such as news and articles, involves frequent ellipsis and variation and must take the context of the conversation into account.
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_dialog_summarization.py",
"roberta_prepended_single_punct",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Contact for Data Inquiries
| Contact person | Phone | Email |
| ------------- | ------------- | ------------- |
| 김수경(바이브컴퍼니) | 02-565-0531 | ckaskan@vaiv.kr |
## Copyright
### About the Data
The AI training data provided on AI Hub (hereinafter "AI data") was built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this project, namely the data, the AI application models, the source code of the data authoring tools, and the various manuals (hereinafter "AI data, etc."), belong to the organizations that carried out or participated in building the AI data, etc. (hereinafter "implementing organizations, etc.") and to NIA.
The AI data, etc. were built to advance AI technology and AI products and services, and may be used for commercial and non-commercial research and development purposes in a wide range of fields such as intelligent products and services and chatbots.
### Data Use Policy
- Please note that in order to use the AI data, etc., you must agree to and comply with the following.
1. When using the AI data, etc., you must state that they are a result of an NIA project, and the same attribution must appear in any derivative works based on them.
2. For a corporation, organization, or individual located outside Korea to use the AI data, etc., a separate agreement with the implementing organizations, etc. and NIA is required.
3. Taking the AI data, etc. outside Korea likewise requires a separate agreement with the implementing organizations, etc. and NIA.
4. The AI data may be used only for training AI models. NIA may refuse to provide the AI data, etc. if it judges the purpose, method, or content of their use to be unlawful or inappropriate, and, where they have already been provided, may demand that use be stopped and that the AI data, etc. be returned or destroyed.
5. You may not let any corporation, organization, or individual not approved by the implementing organizations, etc. and NIA view the provided AI data, etc., nor provide, transfer, lend, or sell the data to them.
6. All civil and criminal liability arising from use outside the purpose in item 4, or from unauthorized viewing, provision, transfer, lending, or sale under item 5, lies with the corporation, organization, or individual that used the AI data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information or the like, the user must immediately report this to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic information) received from AI Hub must be used safely for purposes such as AI service development, and it must not be used in any attempt to re-identify individuals.
9. If NIA later conducts a survey on use cases and outcomes, users must cooperate in good faith.
### How to Request the Data
1. To download the AI data, etc. provided through AI Hub, a separate procedure is required to verify the applicant's identity, collect information, and state the purpose of use.
2. Materials other than the AI data, such as data descriptions and authoring tools, can be used without a separate application procedure or login.
3. For AI data, etc. whose rights holder is not NIA, the use policy and download procedure of the relevant organization apply; please note these are unrelated to AI Hub. | false | 19 | false | KETI-AIR/aihub_dialog_summarization | 2022-10-31T06:10:39.000Z | null | false | d861d01d303d7a171b319d0e8dc01ff87ac3b2e0 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_dialog_summarization/resolve/main/README.md | ---
license: apache-2.0
---
|
tglcourse | null | null | null | false | null | false | tglcourse/latent_celebA_256px | 2022-10-28T11:49:27.000Z | null | false | 9a1c7f132e7b9066c18722c97c7dbf06b85012de | [] | [] | https://huggingface.co/datasets/tglcourse/latent_celebA_256px/resolve/main/README.md | ---
dataset_info:
features:
- name: latent
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3427164684
num_examples: 202599
download_size: 3338993120
dataset_size: 3427164684
---
# Dataset Card for "latent_celebA_256px"
Each image is cropped to 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion.
Decoding:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# load the dataset
dataset = load_dataset('tglcourse/latent_celebA_256px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
image = vae.decode(latent).sample[0] # Decode
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
``` |
efederici | null | null | null | false | null | false | efederici/mt_nap_it | 2022-10-28T14:32:26.000Z | null | false | 9f23ec8ffc93cae32ae3c203ffa6d6610bbbd6c8 | [] | [
"language:it",
"license:unknown",
"size_categories:unknown",
"task_categories:translation",
"tags:conditional-text-generation"
] | https://huggingface.co/datasets/efederici/mt_nap_it/resolve/main/README.md | ---
language:
- it
license:
- unknown
size_categories:
- unknown
task_categories:
- translation
task_ids: []
pretty_name: mt_nap_it
tags:
- conditional-text-generation
---
# Dataset Card for mt_en_it
## Table of Contents
- [Dataset Card for mt_en_it](#dataset-card-for-mt-en-it)
- [Table of Contents](#table-of-contents)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
### Dataset Summary
This dataset comprises traditional Neapolitan songs from [napoligrafia](https://www.napoligrafia.it) translated into Italian.
### Languages
- italian-to-neapolitan
### Data Instances
A sample from the dataset.
```python
{
'url': "url",
'napoletano': "o, quacche ghiuorno, 'a frennesia mme piglia",
'italiano': "o, qualche giorno, la rabbia mi prende"
}
```
The text is provided without further preprocessing or tokenization.
### Data Fields
- `url`: source URL.
- `napoletano`: Neapolitan text.
- `italiano`: Italian text.
### Dataset Creation
The dataset was created by scraping [napoligrafia](https://www.napoligrafia.it) songs. |
zZWipeoutZz | null | null | null | false | null | false | zZWipeoutZz/spectral_wind | 2022-10-28T14:53:12.000Z | null | false | 30b32ca54b7c38130a1bcbf0b5f534904af9971f | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/zZWipeoutZz/spectral_wind/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Disclosure </h4>
<p> While it's not perfect, I hope you are able to create some nice pictures. I am working on improvements for the next embedding, coming soon. If you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by spectral_wind </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by spectral_wind-6500</em></li>
<li>10,000 steps <em>Usage: art by spectral_wind-10000</em> </li>
<li>15,000 steps <em>Usage: art by spectral_wind</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/BJNFbAf.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/nKig2lQ.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/ElF2xde.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://i.imgur.com/QSEM4jU.jpg" target="_blank"><img height="100%" width="100%" src="https://i.imgur.com/QSEM4jU.jpg"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> |
KETI-AIR | null | There is no citation information | # Book Summarization
## Introduction
Book summarization text data for developing AI technology that automatically generates summaries that adequately convey the key content and meaning of book source texts.
## Purpose
A dataset for training AI to produce abstractive summaries from Korean book source texts on a variety of topics.
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_book_summarization.py",
"base",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Contact for Data Inquiries
| Contact person | Phone | Email |
| ------------- | ------------- | ------------- |
| 전영민(딥네츄럴) | 02-6952-0588 | youngmin@deepnatural.ai |
## Copyright
### About the Data
The AI training data provided on AI Hub (hereinafter "AI data") was built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this project, namely the data, the AI application models, the source code of the data authoring tools, and the various manuals (hereinafter "AI data, etc."), belong to the organizations that carried out or participated in building the AI data, etc. (hereinafter "implementing organizations, etc.") and to NIA.
The AI data, etc. were built to advance AI technology and AI products and services, and may be used for commercial and non-commercial research and development purposes in a wide range of fields such as intelligent products and services and chatbots.
### Data Use Policy
- Please note that in order to use the AI data, etc., you must agree to and comply with the following.
1. When using the AI data, etc., you must state that they are a result of an NIA project, and the same attribution must appear in any derivative works based on them.
2. For a corporation, organization, or individual located outside Korea to use the AI data, etc., a separate agreement with the implementing organizations, etc. and NIA is required.
3. Taking the AI data, etc. outside Korea likewise requires a separate agreement with the implementing organizations, etc. and NIA.
4. The AI data may be used only for training AI models. NIA may refuse to provide the AI data, etc. if it judges the purpose, method, or content of their use to be unlawful or inappropriate, and, where they have already been provided, may demand that use be stopped and that the AI data, etc. be returned or destroyed.
5. You may not let any corporation, organization, or individual not approved by the implementing organizations, etc. and NIA view the provided AI data, etc., nor provide, transfer, lend, or sell the data to them.
6. All civil and criminal liability arising from use outside the purpose in item 4, or from unauthorized viewing, provision, transfer, lending, or sale under item 5, lies with the corporation, organization, or individual that used the AI data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information or the like, the user must immediately report this to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic information) received from AI Hub must be used safely for purposes such as AI service development, and it must not be used in any attempt to re-identify individuals.
9. If NIA later conducts a survey on use cases and outcomes, users must cooperate in good faith.
### How to Request the Data
1. To download the AI data, etc. provided through AI Hub, a separate procedure is required to verify the applicant's identity, collect information, and state the purpose of use.
2. Materials other than the AI data, such as data descriptions and authoring tools, can be used without a separate application procedure or login.
3. For AI data, etc. whose rights holder is not NIA, the use policy and download procedure of the relevant organization apply; please note these are unrelated to AI Hub. | false | 26 | false | KETI-AIR/aihub_book_summarization | 2022-10-31T06:10:02.000Z | null | false | 2c4c21cd368119bf57d5fce72dedf0f1476df226 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_book_summarization/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | There is no citation information | # Document Summarization Text
## Introduction
Source-text and summary text data for developing technology in which AI understands text and automatically summarizes its key content.
## Purpose
A dataset for training AI to produce extractive and abstractive summaries from Korean source texts on a variety of topics.
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_document_summarization.py",
"base",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## Contact for Data Inquiries
| Contact person | Phone | Email |
| ------------- | ------------- | ------------- |
| 최재웅(비플라이소프트) | 070-7091-8560 | jw.choi@bflysoft.com |
## Copyright
### About the Data
The AI training data provided on AI Hub (hereinafter "AI data") was built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this project, namely the data, the AI application models, the source code of the data authoring tools, and the various manuals (hereinafter "AI data, etc."), belong to the organizations that carried out or participated in building the AI data, etc. (hereinafter "implementing organizations, etc.") and to NIA.
The AI data, etc. were built to advance AI technology and AI products and services, and may be used for commercial and non-commercial research and development purposes in a wide range of fields such as intelligent products and services and chatbots.
### Data Use Policy
- Please note that in order to use the AI data, etc., you must agree to and comply with the following.
1. When using the AI data, etc., you must state that they are a result of an NIA project, and the same attribution must appear in any derivative works based on them.
2. For a corporation, organization, or individual located outside Korea to use the AI data, etc., a separate agreement with the implementing organizations, etc. and NIA is required.
3. Taking the AI data, etc. outside Korea likewise requires a separate agreement with the implementing organizations, etc. and NIA.
4. The AI data may be used only for training AI models. NIA may refuse to provide the AI data, etc. if it judges the purpose, method, or content of their use to be unlawful or inappropriate, and, where they have already been provided, may demand that use be stopped and that the AI data, etc. be returned or destroyed.
5. You may not let any corporation, organization, or individual not approved by the implementing organizations, etc. and NIA view the provided AI data, etc., nor provide, transfer, lend, or sell the data to them.
6. All civil and criminal liability arising from use outside the purpose in item 4, or from unauthorized viewing, provision, transfer, lending, or sale under item 5, lies with the corporation, organization, or individual that used the AI data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information or the like, the user must immediately report this to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic information) received from AI Hub must be used safely for purposes such as AI service development, and it must not be used in any attempt to re-identify individuals.
9. If NIA later conducts a survey on use cases and outcomes, users must cooperate in good faith.
### How to Request the Data
1. To download the AI data, etc. provided through AI Hub, a separate procedure is required to verify the applicant's identity, collect information, and state the purpose of use.
2. Materials other than the AI data, such as data descriptions and authoring tools, can be used without a separate application procedure or login.
3. For AI data, etc. whose rights holder is not NIA, the use policy and download procedure of the relevant organization apply; please note these are unrelated to AI Hub. | false | 46 | false | KETI-AIR/aihub_document_summarization | 2022-10-31T06:09:35.000Z | null | false | 38ad02a2f5fe6817a0a4e820a8fb94ba2c8cfb3d | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_document_summarization/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | There is no citation information | # 논문자료 요약
## 소개
요약문을 자동으로 생성하는 자료 요약 AI 기술 개발을 위한 논문 요약, 특허 명세서 요약 텍스트 데이터
## 구축목적
다양한 주제의 한국어 학술논문 및 특허명세서에서 전체 요약문과 섹션별요약문을 도출해낼 수 있도록 인공지능을 훈련하기 위한 데이터셋
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_paper_summarization.py",
"base",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## 데이터 관련 문의처
| 담당자명 | 전화번호 | 이메일 |
| ------------- | ------------- | ------------- |
| 정규상(나라지식정보) | 02-3141-7644 | qsang.j@gmail.com |
## Copyright
### 데이터 소개
AI 허브에서 제공되는 인공지능 학습용 데이터(이하 ‘AI데이터’라고 함)는 과학기술정보통신부와 한국지능정보사회진흥원의 「지능정보산업 인프라 조성」 사업의 일환으로 구축되었으며, 본 사업의 유‧무형적 결과물인 데이터, AI 응용모델 및 데이터 저작도구의 소스, 각종 매뉴얼 등(이하 ‘AI데이터 등’)에 대한 일체의 권리는 AI데이터 등의 구축 수행기관 및 참여기관(이하 ‘수행기관 등’)과 한국지능정보사회진흥원에 있습니다.
본 AI데이터 등은 인공지능 기술 및 제품·서비스 발전을 위하여 구축하였으며, 지능형 제품・서비스, 챗봇 등 다양한 분야에서 영리적・비영리적 연구・개발 목적으로 활용할 수 있습니다.
### 데이터 이용정책
- 본 AI데이터 등을 이용하기 위해서 다음 사항에 동의하며 준수해야 함을 고지합니다.
1. 본 AI데이터 등을 이용할 때에는 반드시 한국지능정보사회진흥원의 사업결과임을 밝혀야 하며, 본 AI데이터 등을 이용한 2차적 저작물에도 동일하게 밝혀야 합니다.
2. 국외에 소재하는 법인, 단체 또는 개인이 AI데이터 등을 이용하기 위해서는 수행기관 등 및 한국지능정보사회진흥원과 별도로 합의가 필요합니다.
3. 본 AI데이터 등의 국외 반출을 위해서는 수행기관 등 및 한국지능정보사회진흥원과 별도로 합의가 필요합니다.
4. 본 AI데이터는 인공지능 학습모델의 학습용으로만 사용할 수 있습니다. 한국지능정보사회진흥원은 AI데이터 등의 이용의 목적이나 방법, 내용 등이 위법하거나 부적합하다고 판단될 경우 제공을 거부할 수 있으며, 이미 제공한 경우 이용의 중지와 AI 데이터 등의 환수, 폐기 등을 요구할 수 있습니다.
5. 제공 받은 AI데이터 등을 수행기관 등과 한국지능정보사회진흥원의 승인을 받지 않은 다른 법인, 단체 또는 개인에게 열람하게 하거나 제공, 양도, 대여, 판매하여서는 안됩니다.
6. AI데이터 등에 대해서 제 4항에 따른 목적 외 이용, 제5항에 따른 무단 열람, 제공, 양도, 대여, 판매 등의 결과로 인하여 발생하는 모든 민・형사 상의 책임은 AI데이터 등을 이용한 법인, 단체 또는 개인에게 있습니다.
7. 이용자는 AI 허브 제공 데이터셋 내에 개인정보 등이 포함된 것이 발견된 경우, 즉시 AI 허브에 해당 사실을 신고하고 다운로드 받은 데이터셋을 삭제하여야 합니다.
8. AI 허브로부터 제공받은 비식별 정보(재현정보 포함)를 인공지능 서비스 개발 등의 목적으로 안전하게 이용하여야 하며, 이를 이용해서 개인을 재식별하기 위한 어떠한 행위도 하여서는 안됩니다.
9. 향후 한국지능정보사회진흥원에서 활용사례・성과 등에 관한 실태조사를 수행 할 경우 이에 성실하게 임하여야 합니다.
### 데이터 다운로드 신청방법
1. AI 허브를 통해 제공 중인 AI데이터 등을 다운로드 받기 위해서는 별도의 신청자 본인 확인과 정보 제공, 목적을 밝히는 절차가 필요합니다.
2. AI데이터를 제외한 데이터 설명, 저작 도구 등은 별도의 신청 절차나 로그인 없이 이용이 가능합니다.
3. 한국지능정보사회진흥원이 권리자가 아닌 AI데이터 등은 해당 기관의 이용정책과 다운로드 절차를 따라야 하며 이는 AI 허브와 관련이 없음을 알려 드립니다. | false | 70 | false | KETI-AIR/aihub_paper_summarization | 2022-10-31T06:09:11.000Z | null | false | a60791cb04316a54dba05589959c132a4cdeae1d | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_paper_summarization/resolve/main/README.md | ---
license: apache-2.0
---
|
Accomplice | null | null | null | false | null | false | Accomplice/watercolor-stable-diffusion | 2022-10-28T15:17:32.000Z | null | false | 00bcb416704a2c2133d61bf42a90c2547bd34d14 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Accomplice/watercolor-stable-diffusion/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
maximedb | null | null | null | false | null | false | maximedb/massive_generated | 2022-10-28T13:43:16.000Z | null | false | 1f94d69bdf8e059de2b2163f99816d21d8efa413 | [] | [
"license:mit"
] | https://huggingface.co/datasets/maximedb/massive_generated/resolve/main/README.md | ---
license: mit
---
|
MahdiA | null | null | null | false | null | false | MahdiA/Iran-protests-media | 2022-10-28T14:59:06.000Z | null | false | 422fa1b362f44da776232e5c6d79ef0e9d9d665e | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/MahdiA/Iran-protests-media/resolve/main/README.md | ---
license: apache-2.0
---
# Media Dataset for IRAN Protests
Following the recent protests in Iran over [__Mahsa Amini__](https://en.wikipedia.org/wiki/Death_of_Mahsa_Amini)'s death, her name has been trending on social media platforms like Twitter ([#MahsaAmini](https://twitter.com/search?q=%23MahsaAmini), [#مهسا_امینی](https://twitter.com/search?q=%23%D9%85%D9%87%D8%B3%D8%A7_%D8%A7%D9%85%DB%8C%D9%86%DB%8C)).
Until October 15, 2022, there were 300+ million tweets on Twitter, many of which include media files such as images and videos.
This dataset will be helpful for media companies, developers, or anyone interested in reviewing and assessing these files. Our data has been collected since September 14, 2022.
More than __3.1M records__ (including 2.5M unique images and 600 thousand videos) are available in the current dataset.
### Dataset:
1. created_at: datetime when the tweet was posted
2. md_url: URL of the media
3. md_type: media type (image or video)
4. tw_id: tweet ID
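As a quick illustration, the four fields above can be consumed with plain Python. The records below are hypothetical placeholders that only mirror the schema; they are not real rows from the dataset.

```python
# Hypothetical records mirroring the dataset schema
# (created_at, md_url, md_type, tw_id) -- not real data.
records = [
    {"created_at": "2022-09-14T10:00:00Z",
     "md_url": "https://example.com/a.jpg", "md_type": "image", "tw_id": "1"},
    {"created_at": "2022-09-15T11:30:00Z",
     "md_url": "https://example.com/b.mp4", "md_type": "video", "tw_id": "2"},
]

# Split the media records by type, as one would before downloading them.
images = [r for r in records if r["md_type"] == "image"]
videos = [r for r in records if r["md_type"] == "video"]
print(len(images), len(videos))
```

In practice the same filter would run over the full 3.1M-record file.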
## Disclaimer:
The dataset includes any type of media, exactly as published by users on Twitter; the publisher of this dataset therefore accepts no responsibility for that content.
For more information about the dataset and how to download and read the media files, please refer to [Github](https://github.com/M-Amrollahi/Iran-protests-media). |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164990 | 2022-10-28T14:26:30.000Z | null | false | 91ca4a0e810217bb1ac2e440805ccb3514bf2637 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164990/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-1.3b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164991 | 2022-10-28T14:28:19.000Z | null | false | 08173c5722c09727379f8ec5f538618236827272 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164991/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-2.7b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164992 | 2022-10-28T14:50:20.000Z | null | false | 4a74d8864b2f0617d0e7e1e09d6e294c709b339d | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:mathemakitten/winobias_antistereotype_test_cot_v1"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v1-math-1bbcaf-1917164992/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- mathemakitten/winobias_antistereotype_test_cot_v1
eval_info:
task: text_zero_shot_classification
model: inverse-scaling/opt-13b_eval
metrics: []
dataset_name: mathemakitten/winobias_antistereotype_test_cot_v1
dataset_config: mathemakitten--winobias_antistereotype_test_cot_v1
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v1
* Config: mathemakitten--winobias_antistereotype_test_cot_v1
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. |
awacke1 | null | null | null | false | 1 | false | awacke1/ChatbotMemory.csv | 2022-11-02T23:40:24.000Z | null | false | fee836d899f9df6889ebbb087cb83ac28c79f7ad | [] | [
"license:mit"
] | https://huggingface.co/datasets/awacke1/ChatbotMemory.csv/resolve/main/README.md | ---
license: mit
---
|
DJSoft | null | null | null | false | null | false | DJSoft/maccha_artist_style | 2022-10-28T15:42:35.000Z | null | false | e6a55222be038bcba0337e130e77fa35ff45474a | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/DJSoft/maccha_artist_style/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Description </h4>
Use this Stable Diffusion embedding to achieve the style of Matcha_ / maccha_(mochancc) <a rel="noopener nofollow" href="https://www.pixiv.net/en/users/2583663">Pixiv</a>
<br>
<h4> Usage </h4>
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt, add
<em style="font-weight:600">art by maccha</em>
and add <b>( :1.0)</b> around it to modify its weight.
<h4> Included Files </h4>
<ul>
<li>8000 steps <em>Usage: art by maccha-8000</em></li>
<li>15,000 steps <em>Usage: art by maccha</em></li>
</ul>
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><a href="https://ibb.co/jvNwT0g"><img src="https://i.ibb.co/zhM2r9G/tmpnpbpx-zo.png" alt="tmpnpbpx-zo" border="0"></a></td>
<td><a href="https://imgbb.com/"><img src="https://i.ibb.co/VjFK1mL/tmp2tjaezq9.png" alt="tmp2tjaezq9" border="0"></a></td>
<td><a href="https://imgbb.com/"><img src="https://i.ibb.co/sK9s3Ys/tmp2h81fifw.png" alt="tmp2h81fifw" border="0"></a></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://ibb.co/QnxwB2s"><img src="https://i.ibb.co/bLG9hSp/tmp39c9eejm.png" alt="tmp39c9eejm" border="0"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce or share illegal or harmful outputs or content</li>
<li>The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> |
siberspace | null | null | null | false | null | false | siberspace/eric | 2022-10-28T15:14:27.000Z | null | false | 955a3de11d4490cdcc998065f1bbf956c6f7b5ad | [] | [] | https://huggingface.co/datasets/siberspace/eric/resolve/main/README.md | |
jhonparra18 | null | null | null | false | 1 | false | jhonparra18/urivoldemort | 2022-10-28T15:36:57.000Z | null | false | a3e1b74a5b17e7cde43a854aa46553ebfa016a4c | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/jhonparra18/urivoldemort/resolve/main/README.md | ---
license: afl-3.0
---
|
DJSoft | null | null | null | false | null | false | DJSoft/yuki_miku_2017_outfit | 2022-10-28T16:03:02.000Z | null | false | 1e9ff971a14cea907b9d5d0061521b07cdd31f8a | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/DJSoft/yuki_miku_2017_outfit/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Description </h4>
Use this Stable Diffusion embedding to achieve the Hatsune Miku Yuki Style 2017 outfit
<br>
<h4> Usage </h4>
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt, add
<em style="font-weight:600">yuki_miku_2017-*</em>
and add <b>( :1.0)</b> around it to modify its weight.
<h4> Included Files </h4>
<ul>
<li>8000 steps <em>Usage: art by yuki_miku_2017-8000</em></li>
<li>10000 steps <em>Usage: art by yuki_miku_2017-10000</em></li>
<li>15,000 steps <em>Usage: art by yuki_miku_2017-15000</em></li>
</ul>
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><a href="https://ibb.co/bQRS92F"><img src="https://i.ibb.co/fqGhwn9/tmphyit5puq.png" alt="tmphyit5puq" border="0"></a></td>
<td><a href="https://ibb.co/C0JtZGr"><img src="https://i.ibb.co/nk30vJW/tmp1adic03y.png" alt="tmp1adic03y" border="0"></a></td>
<td><a href="https://ibb.co/3r2nsjW"><img src="https://i.ibb.co/9867v12/tmpm4ctsgp5.png" alt="tmpm4ctsgp5" border="0"></a></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<a href="https://ibb.co/DzbTtTb"><img src="https://i.ibb.co/FmWCKCW/tmpir8qtnzc.png" alt="tmpir8qtnzc" border="0"></a>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce or share illegal or harmful outputs or content</li>
<li>The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> |
davanstrien | null | @dataset{kasra_hosseini_2022_7147906,
author = {Kasra Hosseini and
Daniel C.S. Wilson and
Kaspar Beelen and
Katherine McDonough},
title = {MapReader_Data_SIGSPATIAL_2022},
month = oct,
year = 2022,
publisher = {Zenodo},
version = {v0.3.3},
doi = {10.5281/zenodo.7147906},
url = {https://doi.org/10.5281/zenodo.7147906}
} | TODO | false | 1 | false | davanstrien/maps_nls | 2022-11-14T17:04:04.000Z | null | true | 8db019c9ac4a8cfe62e3c6f0a874574ebfaf564b | [] | [] | https://huggingface.co/datasets/davanstrien/maps_nls/resolve/main/README.md | |
Accomplice | null | null | null | false | null | false | Accomplice/space-stable-diffusion | 2022-10-28T19:06:20.000Z | null | false | 5b5c17fa6b0fa0fa135b613d7aa38886d05087d9 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Accomplice/space-stable-diffusion/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
Accomplice | null | null | null | false | null | false | Accomplice/cool-cats-stable-diffusion | 2022-10-28T15:50:24.000Z | null | false | f904be1d118ce70d1f6a87dd4cfa17225fedf788 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Accomplice/cool-cats-stable-diffusion/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
Accomplice | null | null | null | false | null | false | Accomplice/pixelart-stable-diffusion | 2022-10-28T18:18:02.000Z | null | false | 2f74a267610071cde90358678ddecb17f60ba8e0 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Accomplice/pixelart-stable-diffusion/resolve/main/README.md | ---
license: creativeml-openrail-m
---
|
nixjoe | null | null | null | false | null | false | nixjoe/mylover1 | 2022-10-28T15:58:32.000Z | null | false | e4e0bfacafaf6c10eb3f6c1f862ce10f525a65e3 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/nixjoe/mylover1/resolve/main/README.md | ---
license: unknown
---
|
kkchi | null | null | null | false | null | false | kkchi/123123 | 2022-10-28T18:48:57.000Z | null | false | 12776b41f447dfaec6bb9fb9ad83e641a994e5ea | [] | [] | https://huggingface.co/datasets/kkchi/123123/resolve/main/README.md | |
Toveline | null | null | null | false | null | false | Toveline/images | 2022-10-28T19:08:39.000Z | null | false | 3f8acaa1c5617254e9be52b421e7e9eafbc517d2 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Toveline/images/resolve/main/README.md | ---
license: unknown
---
|
kanak8278 | null | null | null | false | 11 | false | kanak8278/focus_test | 2022-10-28T18:42:53.000Z | null | false | 115a522e89601c99a3ee2b4f9622b8df0a19639f | [] | [] | https://huggingface.co/datasets/kanak8278/focus_test/resolve/main/README.md | ---
dataset_info:
features:
- name: dialogID
dtype: string
- name: utterance
dtype: int64
- name: query
dtype: string
- name: hit_knowledge
dtype: string
- name: ground_knowledge
dtype: string
- name: ground_persona
dtype: string
- name: similarity_score
dtype: float64
- name: persona1
dtype: string
- name: persona2
dtype: string
- name: persona3
dtype: string
- name: persona4
dtype: string
- name: persona5
dtype: string
- name: persona_grounding1
dtype: bool
- name: persona_grounding2
dtype: bool
- name: persona_grounding3
dtype: bool
- name: persona_grounding4
dtype: bool
- name: persona_grounding5
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6713468
num_examples: 9035
download_size: 2783764
dataset_size: 6713468
---
# Dataset Card for "focus_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
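A minimal sketch of consuming a row with the schema listed above; the row here is a hypothetical example with invented values, not a record from the dataset.

```python
# Hypothetical row following the focus_test schema above (invented values).
row = {
    "dialogID": "D1",
    "utterance": 3,
    "query": "Where is the museum located?",
    "similarity_score": 0.87,
    "persona1": "I love art.",
    "persona2": "I live in Paris.",
    "persona3": "I enjoy hiking.",
    "persona4": "I am a student.",
    "persona5": "I collect stamps.",
    "persona_grounding1": True,
    "persona_grounding2": False,
    "persona_grounding3": False,
    "persona_grounding4": False,
    "persona_grounding5": False,
}

# Collect the persona sentences flagged as grounded for this utterance.
grounded = [row[f"persona{i}"] for i in range(1, 6) if row[f"persona_grounding{i}"]]
print(grounded)
```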
severo | null | @inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
} | GLUE, the General Language Understanding Evaluation benchmark
(https://gluebenchmark.com/) is a collection of resources for training,
evaluating, and analyzing natural language understanding systems. | false | 55 | false | severo/glue | 2022-10-28T16:35:04.000Z | glue | false | 84f973e948620e38b0c7e9fa880c20ab0eeede0a | [] | [
"annotations_creators:other",
"language_creators:other",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:acceptability-classification",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"task_ids:sentiment-classification",
"task_ids:text-scoring",
"configs:ax",
"configs:cola",
"configs:mnli",
"configs:mnli_matched",
"configs:mnli_mismatched",
"configs:mrpc",
"configs:qnli",
"configs:qqp",
"configs:rte",
"configs:sst2",
"configs:stsb",
"configs:wnli",
"tags:qa-nli",
"tags:coreference-nli",
"tags:paraphrase-identification"
] | https://huggingface.co/datasets/severo/glue/resolve/main/README.md | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
paperswithcode_id: glue
pretty_name: GLUE (General Language Understanding Evaluation benchmark)
train-eval-index:
- config: cola
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: sst2
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: validation
col_mapping:
sentence: text
label: target
- config: mrpc
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: qqp
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question1: text1
question2: text2
label: target
- config: stsb
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: mnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation_matched
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_mismatched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: mnli_matched
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
premise: text1
hypothesis: text2
label: target
- config: qnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
question: text1
sentence: text2
label: target
- config: rte
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
- config: wnli
task: text-classification
task_id: natural_language_inference
splits:
train_split: train
eval_split: validation
col_mapping:
sentence1: text1
sentence2: text2
label: target
configs:
- ax
- cola
- mnli
- mnli_matched
- mnli_mismatched
- mrpc
- qnli
- qqp
- rte
- sst2
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
---
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
#### qqp
The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set examples. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
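As an illustration of the pronoun-substitution conversion described above, the sketch below builds one sentence pair per candidate referent. The sentence and referents are an invented Winograd-style example, not drawn from the corpus.

```python
import re

# Invented Winograd-style example (not from the corpus).
sentence = "The trophy didn't fit in the suitcase because it was too big."
candidates = ["the trophy", "the suitcase"]

# Build one (premise, hypothesis) pair per candidate referent by
# substituting the referent for the ambiguous pronoun. A word-boundary
# regex avoids touching the "it" inside words such as "fit".
pairs = [
    (sentence, re.sub(r"\bit\b", referent, sentence, count=1))
    for referent in candidates
]
for premise, hypothesis in pairs:
    print(hypothesis)
```

A model then labels each pair as entailed or not entailed; ideally only the pair with the correct referent is entailed by the original sentence.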
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
  "idx": 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
  "idx": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
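The integer labels listed above can be mapped back to their string names with a small helper (a sketch, not part of the dataset itself; `-1` marks unlabeled test examples, as in the instances shown earlier):

```python
# Class-label mapping for the three-way NLI configs (ax, mnli, mnli_matched,
# mnli_mismatched); -1 is the placeholder used for unlabeled test examples.
NLI_LABELS = {0: "entailment", 1: "neutral", 2: "contradiction", -1: "unlabeled"}

def decode(example: dict) -> dict:
    """Attach a human-readable label name to an NLI example."""
    out = dict(example)
    out["label_name"] = NLI_LABELS[example["label"]]
    return out

ex = {"premise": "What have you decided, what are you going to do?",
      "hypothesis": "So what's your decision?",
      "label": -1, "idx": 0}
print(decode(ex)["label_name"])
```

The two-class configs (cola, sst2, rte, wnli, etc.) would use their own mappings, but the pattern is the same.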
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
Note that each GLUE dataset has its own citation. Please see the source to see
the correct citation for each contained dataset.
```
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
Rubio41 | null | null | null | false | null | false | Rubio41/hearthstone_images | 2022-10-29T01:45:26.000Z | null | false | cd87d4cb4b13cb44be50e15632b17f559ec4263d | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Rubio41/hearthstone_images/resolve/main/README.md | ---
license: openrail
---
|
chavinlo | null | null | null | false | null | false | chavinlo/stream-test-1 | 2022-10-30T03:15:31.000Z | null | false | 426babfef86cf39a081fcc71c8587c4643b0e9a5 | [] | [] | https://huggingface.co/datasets/chavinlo/stream-test-1/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 27646923235
num_examples: 531
download_size: 3268316867
dataset_size: 27646923235
---
# Dataset Card for "stream-test-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hamza50 | null | null | null | false | null | false | hamza50/testimg | 2022-10-29T05:09:29.000Z | null | false | d1113c43f763980d981d085de5f414342d0f15b3 | [] | [
"license:wtfpl"
] | https://huggingface.co/datasets/hamza50/testimg/resolve/main/README.md | ---
license: wtfpl
---
|
MarkGG | null | null | null | false | 45 | false | MarkGG/Romance-cleaned-3 | 2022-10-29T06:03:39.000Z | null | false | 8c8403a9c0cb6a7c50d305d661bb06f8f1eac2d5 | [] | [] | https://huggingface.co/datasets/MarkGG/Romance-cleaned-3/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3369959.5092553934
num_examples: 6466
- name: validation
num_bytes: 374729.4907446068
num_examples: 719
download_size: 2300275
dataset_size: 3744689.0
---
# Dataset Card for "Romance-cleaned-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
svjack | null | null | null | false | 7 | false | svjack/pokemon-blip-captions-en-ja | 2022-10-31T06:22:04.000Z | null | false | 8b2557a673e0e0d687c1484a7e197d3f8c43c699 | [] | [
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language:ja",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/svjack/pokemon-blip-captions-en-ja/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
- ja
language_creators:
- other
multilinguality:
- multilingual
pretty_name: 'Pokémon BLIP captions'
size_categories:
- n<1K
source_datasets:
- huggan/few-shot-pokemon
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Pokémon BLIP captions with English and Japanese.
Dataset used to train a Pokémon text-to-image model, adding a Japanese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced by "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis" (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
For each row the dataset contains `image`, `en_text` (caption in English), and `ja_text` (caption in Japanese) keys. `image` is a varying-size PIL jpeg, and the text fields are the accompanying captions. Only a train split is provided.
The Japanese captions are translated by [Deepl](https://www.deepl.com/translator) |
siberspace | null | null | null | false | null | false | siberspace/eric2 | 2022-10-29T09:51:39.000Z | null | false | 12d7062a7184418e6cd6c9f3cfb306683b945e73 | [] | [] | https://huggingface.co/datasets/siberspace/eric2/resolve/main/README.md | |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/sam_yang | 2022-10-29T11:26:45.000Z | null | false | 7f5566dbfedcb5db78e493a0bdf04b410ec769fe | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/sam_yang/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Sam Yang Artist Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by sam_yang"```
If it is too strong, just add [] around it.
Trained until 5000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/cbtBjwH.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/r5s8bSO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/NpGj5KU.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/eWJlaf5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/DOJvxTJ.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Ceyase | null | null | null | false | null | false | Ceyase/audio-diffusion-touhou | 2022-10-29T12:14:50.000Z | null | false | b0d014855f835843f12ca5da42f300baf8c60227 | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/Ceyase/audio-diffusion-touhou/resolve/main/README.md | ---
license: gpl-3.0
---
|
Toveline | null | null | null | false | null | false | Toveline/toveline | 2022-10-30T11:35:14.000Z | null | false | a69385023798d1f563a7a7e9f4abd607a9df71f8 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Toveline/toveline/resolve/main/README.md | ---
license: unknown
---
|
Dialogue-Model-Research-Group | null | null | null | false | null | false | Dialogue-Model-Research-Group/baike | 2022-11-12T16:00:22.000Z | null | false | 80d17c75a21f9f075690f5e142f76ee1343c7968 | [] | [
"license:cc"
] | https://huggingface.co/datasets/Dialogue-Model-Research-Group/baike/resolve/main/README.md | ---
license: cc
---
|
siberspace | null | null | null | false | null | false | siberspace/katia | 2022-10-29T13:13:07.000Z | null | false | 34b9001dc31ff1fa092786fef67831c12719e37e | [] | [] | https://huggingface.co/datasets/siberspace/katia/resolve/main/README.md | |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-98a820-1924665124 | 2022-10-29T15:11:10.000Z | null | false | 9cf098c4cfce7ab970110f983f16773087f13830 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:cnn_dailymail"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-98a820-1924665124/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- cnn_dailymail
eval_info:
task: summarization
model: patrickvonplaten/bert2bert_cnn_daily_mail
metrics: ['accuracy', 'bleu']
dataset_name: cnn_dailymail
dataset_config: 3.0.0
dataset_split: test
col_mapping:
text: article
target: highlights
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Jomon07](https://huggingface.co/Jomon07) for evaluating this model. |
e3rastel | null | null | null | false | null | false | e3rastel/training_christie | 2022-10-29T15:19:53.000Z | null | false | a94866390f154522e1f1ae2c26f3cbfc22259d13 | [] | [] | https://huggingface.co/datasets/e3rastel/training_christie/resolve/main/README.md | |
joell | null | null | null | false | null | false | joell/project1 | 2022-10-29T15:24:40.000Z | null | false | a9a91c5a9021379e94d680e3ec197ed446894ecd | [] | [
"license:mit"
] | https://huggingface.co/datasets/joell/project1/resolve/main/README.md | ---
license: mit
---
|
zZWipeoutZz | null | null | null | false | null | false | zZWipeoutZz/insane_style | 2022-10-29T16:31:20.000Z | null | false | 55c6aa8cf6594b07167e47488ae303b84f4daf38 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/zZWipeoutZz/insane_style/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Disclosure </h4>
<p> I hope that you are able to create some nice pictures; if you have any embedding suggestions or issues please let me know </p>
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by insane_style </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by insane_style-6500</em></li>
<li>10,000 steps <em>Usage: art by insane_style-10000</em> </li>
<li>15,000 steps <em>Usage: art by insane_style</em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/YGROrC5.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/IFQRJcH.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/FwfXft0.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/SEkzaVr.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/SEkzaVr.jpg"></a>
|
rajshekar2591 | null | null | null | false | null | false | rajshekar2591/testing | 2022-10-29T17:58:19.000Z | null | false | 0ebc43ff404c90551a7aea88c55a074b8ac0af51 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/rajshekar2591/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
zoheb | null | null | null | false | 33 | false | zoheb/sketch-scene | 2022-10-30T10:07:48.000Z | null | false | 22ffd55109e12e1b82003a93e40fee0298e985a3 | [] | [
"license:cc-by-nc-sa-4.0",
"language:en",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<10K",
"source_datasets:FS-COCO",
"task_categories:text-to-image"
] | https://huggingface.co/datasets/zoheb/sketch-scene/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
language:
- en
language_creators:
- machine-generated
multilinguality:
- monolingual
pretty_name: 'Sketch Scene Descriptions'
size_categories:
- n<10K
source_datasets:
- FS-COCO
tags: []
task_categories:
- text-to-image
task_ids: []
---
# Dataset Card for Sketch Scene Descriptions
_Dataset used to train [Sketch Scene text to image model]()_
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills. Our dataset comprises around 10,000 freehand scene vector sketches with per-point space-time information by 100 non-expert individuals, offering both object- and scene-level abstraction. Each sketch is augmented with its text description.
For each row, the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption. Only a train split is provided.
## Citation
If you use this dataset, please cite it as:
```
@inproceedings{fscoco,
title={FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context.}
author={Chowdhury, Pinaki Nath and Sain, Aneeshan and Bhunia, Ayan Kumar and Xiang, Tao and Gryaditskaya, Yulia and Song, Yi-Zhe},
booktitle={ECCV},
year={2022}
}
``` |
eminecg | null | null | null | false | 4 | false | eminecg/turkishReviews-ds-mini | 2022-11-07T10:03:01.000Z | null | false | 5c0abe70104c7e699d1834afd39232def41b0f77 | [] | [] | https://huggingface.co/datasets/eminecg/turkishReviews-ds-mini/resolve/main/README.md | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 1296087.3
num_examples: 3600
- name: validation
num_bytes: 144009.7
num_examples: 400
download_size: 915922
dataset_size: 1440097.0
---
# Dataset Card for "turkishReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stwhiteisme | null | null | null | false | null | false | stwhiteisme/Stwhiteisme | 2022-10-29T18:19:22.000Z | null | false | 13070b62b99dbd27502b95ef9980f2a34d32f691 | [] | [] | https://huggingface.co/datasets/stwhiteisme/Stwhiteisme/resolve/main/README.md | Pictures of ME! |
Tritkoman | null | null | null | false | null | false | Tritkoman/ENtoANGGNOME | 2022-10-29T18:45:18.000Z | null | false | 0c88dc959fd721314f8ad736a96057cf1665e852 | [] | [
"language:en",
"language:nl",
"task_categories:translation"
] | https://huggingface.co/datasets/Tritkoman/ENtoANGGNOME/resolve/main/README.md | ---
language:
- en
- nl
task_categories:
- translation
---
# AutoTrain Dataset for project: oaoqoqkaksk
## Dataset Description
This dataset has been automatically processed by AutoTrain for project oaoqoqkaksk.
### Languages
The BCP-47 code for the dataset's language is en2nl.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": "\u00de\u00e6t Sunnanrastere onl\u00edcnescynn",
"source": "The Sun raster image format"
},
{
"target": "Lundon",
"source": "Gordon"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "Value(dtype='string', id=None)",
"source": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1528 |
| valid | 383 |
|
Zxol | null | null | null | false | null | false | Zxol/Dfv | 2022-10-29T19:46:54.000Z | null | false | d00918e71905f1a4f4696d0e61a979cfe8ccee01 | [] | [
"license:bigscience-bloom-rail-1.0"
] | https://huggingface.co/datasets/Zxol/Dfv/resolve/main/README.md | ---
license: bigscience-bloom-rail-1.0
---
Dfggggvvhg |
mariopeng | null | null | null | false | null | false | mariopeng/openIPAseq2seq | 2022-10-29T20:11:57.000Z | null | false | a6326ffae85441615a056dbfea9ce8131b1d67a6 | [] | [
"license:unlicense"
] | https://huggingface.co/datasets/mariopeng/openIPAseq2seq/resolve/main/README.md | ---
license: unlicense
---
|
AnonymousSub | null | null | null | false | 21 | false | AnonymousSub/recipe_RL_data_roberta-base | 2022-11-03T15:38:06.000Z | null | false | b4b871e5d5f20e77218d34aabfd7e09f782fedd0 | [] | [
"language:en",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/AnonymousSub/recipe_RL_data_roberta-base/resolve/main/README.md | ---
annotations_creators: []
language:
- en
language_creators: []
license: []
multilinguality:
- monolingual
pretty_name: recipe RL roberta base
size_categories: []
source_datasets: []
tags: []
task_categories: []
task_ids: []
---
# Dataset Description
## Structure
- Consists of 5 fields
- Each row corresponds to a policy - sequence of actions, given an initial `<START>` state, and corresponding rewards at each step.
## Fields
`steps`, `step_attn_masks`, `rewards`, `actions`, `dones`
## Field descriptions
- `steps` (List of lists of `Int`s) - token ids of all the steps in the policy sequence (here we use the `roberta-base` tokenizer, as `roberta-base` would be used to encode each step of a recipe)
- `step_attn_masks` (List of lists of `Int`s) - Attention masks corresponding to `steps`
- `rewards` (List of `Float`s) - Sequence of rewards (normalized between 0 and 1) assigned per step.
- `actions` (List of lists of `Int`s) - Sequence of actions (one-hot encoded, as the action space is discrete). There are `33` different actions possible (we consider the maximum number of steps per recipe = `16`, so the action can vary from `-16` to `+16`; the class label is obtained by adding 16 to the actual action value)
- `dones` (List of `Bool`) - Sequence of flags indicating whether the recipe is complete once that step is reached.
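The action encoding described above can be sketched as follows (a hypothetical helper, assuming only what the card states: a discrete range of −16 to +16 and the add-16 class-label convention):

```python
NUM_ACTIONS = 33  # actions range from -16 to +16 inclusive

def encode_action(action: int) -> list:
    """One-hot encode a signed step offset; class label = action + 16."""
    if not -16 <= action <= 16:
        raise ValueError(f"action {action} outside [-16, 16]")
    one_hot = [0] * NUM_ACTIONS
    one_hot[action + 16] = 1
    return one_hot

def decode_action(one_hot: list) -> int:
    """Recover the signed step offset from a one-hot vector."""
    return one_hot.index(1) - 16

vec = encode_action(-3)
print(decode_action(vec))
```

Round-tripping through `encode_action`/`decode_action` recovers the original offset, which is the property any consumer of the `actions` field relies on.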
## Dataset Size
- Number of rows = `2255673`
- Maximum number of steps per row = `16` |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/chibi_style | 2022-10-29T21:50:26.000Z | null | false | d98f91761614aa984340c6ce99a333e4b2cd21b6 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/chibi_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Chibi Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by chibi_style"```
Use the (Chibi) tag beside the embedding for best results
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/rXHJyFQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/eocJJXg.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/8dA3EUO.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/mmChRb3.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/sooxpE5.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
nick-carroll1 | null | null | null | false | null | false | nick-carroll1/sp500 | 2022-10-30T00:08:46.000Z | null | false | 008edafee29d0b086ea59c8b94a83fb12cb1aa00 | [] | [] | https://huggingface.co/datasets/nick-carroll1/sp500/resolve/main/README.md |
# Dataset Card for S&P 500 Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
This dataset was created by combining the daily close prices for each stock in the current (as of 10/29/2022) S&P 500 index dating back to January 1, 1970. The data came from the Kaggle dataset (https://www.kaggle.com/datasets/paultimothymooney/stock-market-data) and was aggregated using pandas before being converted to a Hugging Face Dataset.
### Dataset Summary
This dataset has 407 columns specifying dates and associated close prices of the stocks in the S&P 500 that had data which could be accessed from the above Kaggle dataset. 94 stocks are missing due to issues loading their data (e.g., stock name changes such as FB to META); these items will need further review. There are many NA values because some stocks were not in existence as early as 1970.
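The aggregation described above can be sketched with pandas (an illustrative reconstruction under stated assumptions, not the exact script used; the tickers, dates, and prices below are made up):

```python
import pandas as pd

# One close-price series per ticker, indexed by date, as in the per-stock files.
aapl = pd.Series([0.84, 0.85],
                 index=pd.to_datetime(["1980-12-12", "1980-12-15"]), name="AAPL")
mmm = pd.Series([14.2, 14.3, 14.1],
                index=pd.to_datetime(["1980-12-11", "1980-12-12", "1980-12-15"]),
                name="MMM")

# Column-wise concat outer-joins on the date index: tickers that did not yet
# trade on a given date get NaN, which is where the many NA values come from.
wide = pd.concat([mmm, aapl], axis=1)
print(wide)
```

Repeating this over every ticker yields the wide date-by-ticker table the card describes, with NaN wherever a stock had not yet started trading.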
### Supported Tasks and Leaderboards
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
No split has currently been created for the dataset.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
https://www.kaggle.com/datasets/paultimothymooney/stock-market-data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@nick-carroll1](https://github.com/nick-carroll1) for adding this dataset.
---
dataset_info:
features:
- name: MMM
dtype: float64
- name: AOS
dtype: float64
- name: ABT
dtype: float64
- name: ABBV
dtype: float64
- name: ABMD
dtype: float64
- name: ACN
dtype: float64
- name: ATVI
dtype: float64
- name: ADM
dtype: float64
- name: ADBE
dtype: float64
- name: ADP
dtype: float64
- name: AAP
dtype: float64
- name: A
dtype: float64
- name: APD
dtype: float64
- name: AKAM
dtype: float64
- name: ALK
dtype: float64
- name: ALB
dtype: float64
- name: ARE
dtype: float64
- name: ALGN
dtype: float64
- name: ALLE
dtype: float64
- name: LNT
dtype: float64
- name: GOOG
dtype: float64
- name: MO
dtype: float64
- name: AMZN
dtype: float64
- name: AMD
dtype: float64
- name: AEE
dtype: float64
- name: AAL
dtype: float64
- name: AEP
dtype: float64
- name: AXP
dtype: float64
- name: AIG
dtype: float64
- name: AMT
dtype: float64
- name: AWK
dtype: float64
- name: AMP
dtype: float64
- name: ABC
dtype: float64
- name: AME
dtype: float64
- name: AMGN
dtype: float64
- name: APH
dtype: float64
- name: ADI
dtype: float64
- name: AON
dtype: float64
- name: APA
dtype: float64
- name: AAPL
dtype: float64
- name: AMAT
dtype: float64
- name: ANET
dtype: float64
- name: AJG
dtype: float64
- name: AIZ
dtype: float64
- name: T
dtype: float64
- name: ATO
dtype: float64
- name: ADSK
dtype: float64
- name: AZO
dtype: float64
- name: AVB
dtype: float64
- name: AVY
dtype: float64
- name: BAC
dtype: float64
- name: BAX
dtype: float64
- name: BDX
dtype: float64
- name: WRB
dtype: float64
- name: BBY
dtype: float64
- name: BIO
dtype: float64
- name: BIIB
dtype: float64
- name: BLK
dtype: float64
- name: BK
dtype: float64
- name: BA
dtype: float64
- name: BWA
dtype: float64
- name: BXP
dtype: float64
- name: BSX
dtype: float64
- name: BMY
dtype: float64
- name: AVGO
dtype: float64
- name: BR
dtype: float64
- name: BRO
dtype: float64
- name: CHRW
dtype: float64
- name: CDNS
dtype: float64
- name: CZR
dtype: float64
- name: CPT
dtype: float64
- name: CPB
dtype: float64
- name: COF
dtype: float64
- name: CAH
dtype: float64
- name: KMX
dtype: float64
- name: CAT
dtype: float64
- name: CBOE
dtype: float64
- name: CDW
dtype: float64
- name: CNC
dtype: float64
- name: CNP
dtype: float64
- name: CF
dtype: float64
- name: CRL
dtype: float64
- name: SCHW
dtype: float64
- name: CHTR
dtype: float64
- name: CMG
dtype: float64
- name: CB
dtype: float64
- name: CHD
dtype: float64
- name: CINF
dtype: float64
- name: CTAS
dtype: float64
- name: CSCO
dtype: float64
- name: C
dtype: float64
- name: CFG
dtype: float64
- name: CLX
dtype: float64
- name: CME
dtype: float64
- name: CMS
dtype: float64
- name: KO
dtype: float64
- name: CTSH
dtype: float64
- name: CL
dtype: float64
- name: CMCSA
dtype: float64
- name: CAG
dtype: float64
- name: COP
dtype: float64
- name: ED
dtype: float64
- name: COO
dtype: float64
- name: CPRT
dtype: float64
- name: GLW
dtype: float64
- name: CSGP
dtype: float64
- name: COST
dtype: float64
- name: CCI
dtype: float64
- name: CMI
dtype: float64
- name: DHI
dtype: float64
- name: DRI
dtype: float64
- name: DVA
dtype: float64
- name: DE
dtype: float64
- name: DAL
dtype: float64
- name: DVN
dtype: float64
- name: DXCM
dtype: float64
- name: FANG
dtype: float64
- name: DLR
dtype: float64
- name: DFS
dtype: float64
- name: DISH
dtype: float64
- name: DIS
dtype: float64
- name: DG
dtype: float64
- name: DLTR
dtype: float64
- name: D
dtype: float64
- name: DPZ
dtype: float64
- name: DOV
dtype: float64
- name: DOW
dtype: float64
- name: DTE
dtype: float64
- name: DD
dtype: float64
- name: EMN
dtype: float64
- name: ETN
dtype: float64
- name: EBAY
dtype: float64
- name: ECL
dtype: float64
- name: EIX
dtype: float64
- name: EW
dtype: float64
- name: EA
dtype: float64
- name: LLY
dtype: float64
- name: EMR
dtype: float64
- name: ENPH
dtype: float64
- name: EOG
dtype: float64
- name: EPAM
dtype: float64
- name: EFX
dtype: float64
- name: EQIX
dtype: float64
- name: EQR
dtype: float64
- name: ESS
dtype: float64
- name: EL
dtype: float64
- name: RE
dtype: float64
- name: ES
dtype: float64
- name: EXC
dtype: float64
- name: EXPE
dtype: float64
- name: EXPD
dtype: float64
- name: EXR
dtype: float64
- name: XOM
dtype: float64
- name: FFIV
dtype: float64
- name: FDS
dtype: float64
- name: FAST
dtype: float64
- name: FRT
dtype: float64
- name: FDX
dtype: float64
- name: FITB
dtype: float64
- name: FRC
dtype: float64
- name: FE
dtype: float64
- name: FIS
dtype: float64
- name: FISV
dtype: float64
- name: FLT
dtype: float64
- name: FMC
dtype: float64
- name: F
dtype: float64
- name: FTNT
dtype: float64
- name: FBHS
dtype: float64
- name: FOXA
dtype: float64
- name: BEN
dtype: float64
- name: FCX
dtype: float64
- name: GRMN
dtype: float64
- name: IT
dtype: float64
- name: GNRC
dtype: float64
- name: GD
dtype: float64
- name: GE
dtype: float64
- name: GIS
dtype: float64
- name: GM
dtype: float64
- name: GPC
dtype: float64
- name: GILD
dtype: float64
- name: GPN
dtype: float64
- name: HAL
dtype: float64
- name: HIG
dtype: float64
- name: HAS
dtype: float64
- name: HCA
dtype: float64
- name: HSIC
dtype: float64
- name: HSY
dtype: float64
- name: HES
dtype: float64
- name: HPE
dtype: float64
- name: HLT
dtype: float64
- name: HOLX
dtype: float64
- name: HD
dtype: float64
- name: HON
dtype: float64
- name: HRL
dtype: float64
- name: HST
dtype: float64
- name: HPQ
dtype: float64
- name: HUM
dtype: float64
- name: HBAN
dtype: float64
- name: HII
dtype: float64
- name: IBM
dtype: float64
- name: IEX
dtype: float64
- name: IDXX
dtype: float64
- name: ITW
dtype: float64
- name: ILMN
dtype: float64
- name: INCY
dtype: float64
- name: IR
dtype: float64
- name: INTC
dtype: float64
- name: ICE
dtype: float64
- name: IP
dtype: float64
- name: IPG
dtype: float64
- name: IFF
dtype: float64
- name: INTU
dtype: float64
- name: ISRG
dtype: float64
- name: IVZ
dtype: float64
- name: IRM
dtype: float64
- name: JBHT
dtype: float64
- name: JKHY
dtype: float64
- name: JNJ
dtype: float64
- name: JCI
dtype: float64
- name: JPM
dtype: float64
- name: JNPR
dtype: float64
- name: K
dtype: float64
- name: KEY
dtype: float64
- name: KEYS
dtype: float64
- name: KMB
dtype: float64
- name: KIM
dtype: float64
- name: KLAC
dtype: float64
- name: KHC
dtype: float64
- name: KR
dtype: float64
- name: LH
dtype: float64
- name: LRCX
dtype: float64
- name: LVS
dtype: float64
- name: LDOS
dtype: float64
- name: LNC
dtype: float64
- name: LYV
dtype: float64
- name: LKQ
dtype: float64
- name: LMT
dtype: float64
- name: LOW
dtype: float64
- name: LYB
dtype: float64
- name: MRO
dtype: float64
- name: MPC
dtype: float64
- name: MKTX
dtype: float64
- name: MAR
dtype: float64
- name: MMC
dtype: float64
- name: MLM
dtype: float64
- name: MA
dtype: float64
- name: MKC
dtype: float64
- name: MCD
dtype: float64
- name: MCK
dtype: float64
- name: MDT
dtype: float64
- name: MRK
dtype: float64
- name: MET
dtype: float64
- name: MTD
dtype: float64
- name: MGM
dtype: float64
- name: MCHP
dtype: float64
- name: MU
dtype: float64
- name: MSFT
dtype: float64
- name: MAA
dtype: float64
- name: MHK
dtype: float64
- name: MOH
dtype: float64
- name: TAP
dtype: float64
- name: MDLZ
dtype: float64
- name: MPWR
dtype: float64
- name: MNST
dtype: float64
- name: MCO
dtype: float64
- name: MOS
dtype: float64
- name: MSI
dtype: float64
- name: MSCI
dtype: float64
- name: NDAQ
dtype: float64
- name: NTAP
dtype: float64
- name: NFLX
dtype: float64
- name: NWL
dtype: float64
- name: NEM
dtype: float64
- name: NWSA
dtype: float64
- name: NEE
dtype: float64
- name: NI
dtype: float64
- name: NDSN
dtype: float64
- name: NSC
dtype: float64
- name: NTRS
dtype: float64
- name: NOC
dtype: float64
- name: NCLH
dtype: float64
- name: NRG
dtype: float64
- name: NVDA
dtype: float64
- name: NVR
dtype: float64
- name: NXPI
dtype: float64
- name: ORLY
dtype: float64
- name: OXY
dtype: float64
- name: ODFL
dtype: float64
- name: OMC
dtype: float64
- name: OKE
dtype: float64
- name: PCAR
dtype: float64
- name: PKG
dtype: float64
- name: PH
dtype: float64
- name: PAYX
dtype: float64
- name: PAYC
dtype: float64
- name: PNR
dtype: float64
- name: PEP
dtype: float64
- name: PKI
dtype: float64
- name: PFE
dtype: float64
- name: PM
dtype: float64
- name: PSX
dtype: float64
- name: PNW
dtype: float64
- name: PXD
dtype: float64
- name: PNC
dtype: float64
- name: POOL
dtype: float64
- name: PPG
dtype: float64
- name: PFG
dtype: float64
- name: PG
dtype: float64
- name: PLD
dtype: float64
- name: PRU
dtype: float64
- name: PEG
dtype: float64
- name: PTC
dtype: float64
- name: PHM
dtype: float64
- name: QRVO
dtype: float64
- name: PWR
dtype: float64
- name: QCOM
dtype: float64
- name: DGX
dtype: float64
- name: RL
dtype: float64
- name: RJF
dtype: float64
- name: O
dtype: float64
- name: REG
dtype: float64
- name: REGN
dtype: float64
- name: RF
dtype: float64
- name: RSG
dtype: float64
- name: RMD
dtype: float64
- name: RHI
dtype: float64
- name: ROK
dtype: float64
- name: ROL
dtype: float64
- name: ROP
dtype: float64
- name: ROST
dtype: float64
- name: RCL
dtype: float64
- name: CRM
dtype: float64
- name: SBAC
dtype: float64
- name: SLB
dtype: float64
- name: STX
dtype: float64
- name: SEE
dtype: float64
- name: SRE
dtype: float64
- name: NOW
dtype: float64
- name: SHW
dtype: float64
- name: SBNY
dtype: float64
- name: SPG
dtype: float64
- name: SWKS
dtype: float64
- name: SO
dtype: float64
- name: LUV
dtype: float64
- name: SWK
dtype: float64
- name: SBUX
dtype: float64
- name: STT
dtype: float64
- name: SYK
dtype: float64
- name: SIVB
dtype: float64
- name: SYF
dtype: float64
- name: SNPS
dtype: float64
- name: TMUS
dtype: float64
- name: TROW
dtype: float64
- name: TTWO
dtype: float64
- name: TRGP
dtype: float64
- name: TEL
dtype: float64
- name: TDY
dtype: float64
- name: TSLA
dtype: float64
- name: TXN
dtype: float64
- name: TXT
dtype: float64
- name: TMO
dtype: float64
- name: TJX
dtype: float64
- name: TSCO
dtype: float64
- name: TDG
dtype: float64
- name: TRV
dtype: float64
- name: TYL
dtype: float64
- name: TSN
dtype: float64
- name: USB
dtype: float64
- name: UDR
dtype: float64
- name: ULTA
dtype: float64
- name: UNP
dtype: float64
- name: UAL
dtype: float64
- name: UPS
dtype: float64
- name: URI
dtype: float64
- name: UNH
dtype: float64
- name: UHS
dtype: float64
- name: VTR
dtype: float64
- name: VRSN
dtype: float64
- name: VRSK
dtype: float64
- name: VZ
dtype: float64
- name: VRTX
dtype: float64
- name: VFC
dtype: float64
- name: V
dtype: float64
- name: VMC
dtype: float64
- name: WAB
dtype: float64
- name: WBA
dtype: float64
- name: WMT
dtype: float64
- name: WM
dtype: float64
- name: WAT
dtype: float64
- name: WEC
dtype: float64
- name: WFC
dtype: float64
- name: WST
dtype: float64
- name: WDC
dtype: float64
- name: WRK
dtype: float64
- name: WY
dtype: float64
- name: WHR
dtype: float64
- name: WMB
dtype: float64
- name: WTW
dtype: float64
- name: GWW
dtype: float64
- name: WYNN
dtype: float64
- name: XEL
dtype: float64
- name: XYL
dtype: float64
- name: YUM
dtype: float64
- name: ZBRA
dtype: float64
- name: ZBH
dtype: float64
- name: ZION
dtype: float64
- name: ZTS
dtype: float64
- name: Date
dtype: timestamp[ns]
splits:
- name: train
num_bytes: 44121086
num_examples: 13322
download_size: 0
dataset_size: 44121086
---
# Dataset Card for "sp500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chavinlo | null | null | null | false | 7 | false | chavinlo/stream-test-2 | 2022-10-30T04:03:28.000Z | null | false | 16acd591d6c96928f08507b2b0d37ce3cbfddd8d | [] | [] | https://huggingface.co/datasets/chavinlo/stream-test-2/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: pixel_values
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 27646924225
num_examples: 531
download_size: 3267074280
dataset_size: 27646924225
---
# Dataset Card for "stream-test-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
muchojarabe | null | null | null | false | null | false | muchojarabe/gato_slider | 2022-10-30T04:01:23.000Z | null | false | a45b65f64bc7411bb5d29b5076eb12eb9add8103 | [] | [
"license:cc"
] | https://huggingface.co/datasets/muchojarabe/gato_slider/resolve/main/README.md | ---
license: cc
---
|
allenai | null | null | null | false | 6 | false | allenai/prosocial-dialog | 2022-11-14T13:32:40.000Z | null | false | cbc7d7d3d9850bb38cf785ba74b0b1910ff53bc0 | [] | [
"arxiv:2205.12688",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"license:odc-by",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:original",
"source_datasets:extended|social_bias_frames",
"tags:dialogue",
"tags:dialogue safety",
"tags:social norm",
"tags:rules-of-thumb",
"task_categories:conversational",
"task_categories:text-classification",
"task_ids:dialogue-generation",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/allenai/prosocial-dialog/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- machine-generated
license:
- odc-by
multilinguality:
- monolingual
pretty_name: ProsocialDialog
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- original
- extended|social_bias_frames
tags:
- dialogue
- dialogue safety
- social norm
- rules-of-thumb
task_categories:
- conversational
- text-classification
task_ids:
- dialogue-generation
- multi-class-classification
---
# Dataset Card for ProsocialDialog Dataset
## Dataset Description
- **Repository:** [Dataset and Model](https://github.com/skywalker023/prosocial-dialog)
- **Paper:** [ProsocialDialog: A Prosocial Backbone for Conversational Agents](https://arxiv.org/abs/2205.12688)
- **Point of Contact:** [Hyunwoo Kim](mailto:hyunwook@allenai.org)
## Dataset Summary
ProsocialDialog is the first large-scale multi-turn English dialogue dataset to teach conversational agents to respond to problematic content following social norms. Covering diverse unethical, problematic, biased, and toxic situations, ProsocialDialog contains responses that encourage prosocial behavior, grounded in commonsense social rules (i.e., rules-of-thumb, RoTs). Created via a human-AI collaborative framework, ProsocialDialog consists of 58K dialogues, with 331K utterances, 160K unique RoTs, and 497K dialogue safety labels accompanied by free-form rationales.
## Supported Tasks
* Dialogue response generation
* Dialogue safety prediction
* Rules-of-thumb generation
## Languages
English
## Dataset Structure
### Data Attributes
attribute | type | description
--- | --- | ---
`context` | str | the potentially unsafe utterance
`response` | str | the guiding utterance grounded in rules-of-thumb (`rots`)
`rots` | list of str\|null | the relevant rules-of-thumb for `context` utterances *not* labeled as \_\_casual\_\_
`safety_label` | str | the final verdict of the context according to `safety_annotations`: {\_\_casual\_\_, \_\_possibly\_needs\_caution\_\_, \_\_probably\_needs\_caution\_\_, \_\_needs\_caution\_\_, \_\_needs\_intervention\_\_}
`safety_annotations` | list of str | raw annotations from three workers: {casual, needs caution, needs intervention}
`safety_annotation_reasons` | list of str | the reasons behind the safety annotations in free-form text from each worker
`source` | str | the source of the seed text that was used to craft the first utterance of the dialogue: {socialchemistry, sbic, ethics_amt, ethics_reddit}
`etc` | str\|null | other information
`dialogue_id` | int | the dialogue index
`response_id` | int | the response index
`episode_done` | bool | an indicator of whether it is the end of the dialogue
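To make the relationship between `safety_annotations` and `safety_label` concrete, here is one *hypothetical* aggregation of the three worker labels into the five-way verdict. The thresholds below are an illustrative assumption, not necessarily the authors' exact rule (see the paper for that):

```python
def aggregate_safety(annotations):
    """Illustrative only: collapse three worker labels into the
    five-way safety_label. The exact rule used by the dataset
    authors may differ from this sketch."""
    if annotations.count("needs intervention") >= 2:
        return "__needs_intervention__"
    # Count workers who flagged the utterance as anything but casual.
    flags = sum(a != "casual" for a in annotations)
    return {0: "__casual__",
            1: "__possibly_needs_caution__",
            2: "__probably_needs_caution__",
            3: "__needs_caution__"}[flags]

print(aggregate_safety(["casual", "casual", "casual"]))         # __casual__
print(aggregate_safety(["needs caution", "casual", "casual"]))  # __possibly_needs_caution__
```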
## Dataset Creation
To create ProsocialDialog, we set up a human-AI collaborative data creation framework, where GPT-3 generates the potentially unsafe utterances, and crowdworkers provide prosocial responses to them. This approach allows us to circumvent two substantial challenges: (1) there are no available large-scale corpora of multiturn prosocial conversations between humans, and (2) asking humans to write unethical, toxic, or problematic utterances could result in psychological harms (Roberts, 2017; Steiger et al., 2021).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2205.12688).
## Additional Information
### Citation
Please cite our work if you found the resources in this repository useful:
```
@inproceedings{kim2022prosocialdialog,
title={ProsocialDialog: A Prosocial Backbone for Conversational Agents},
author={Hyunwoo Kim and Youngjae Yu and Liwei Jiang and Ximing Lu and Daniel Khashabi and Gunhee Kim and Yejin Choi and Maarten Sap},
booktitle={EMNLP},
year=2022
}
``` |
arvindrajan92 | null | null | null | false | 10 | false | arvindrajan92/sroie_document_understanding | 2022-10-30T06:30:53.000Z | null | false | 3e9c4eb6eb75d1a72396ab005bcd0abdcf319060 | [] | [
"license:mit"
] | https://huggingface.co/datasets/arvindrajan92/sroie_document_understanding/resolve/main/README.md | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: ocr
list:
- name: box
sequence:
sequence: float64
- name: label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 267317016.0
num_examples: 652
download_size: 217146103
dataset_size: 267317016.0
---
# Dataset Card for "sroie_document_understanding"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
This dataset is an enriched version of the SROIE 2019 dataset, with additional labels for line descriptions and line totals, intended for OCR and layout understanding.
## Dataset Structure
```python
DatasetDict({
train: Dataset({
features: ['image', 'ocr'],
num_rows: 652
})
})
```
### Data Fields
```python
{
'image': PIL Image object,
'ocr': [
# text box 1
{
'box': [[float, float], [float, float], [float, float], [float, float]],
'label': str, # "other" | "company" | "address" | "date" | "line_description" | "line_total" | "total"
'text': str
},
...
# text box N
{
'box': [[float, float], [float, float], [float, float], [float, float]],
'label': str,
'text': str,
}
]
}
```
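Because each `box` is a four-point polygon rather than an axis-aligned rectangle, a common first preprocessing step is a bounding-box conversion. A minimal helper for that (not part of the dataset itself):

```python
def quad_to_bbox(quad):
    """Convert a 4-point polygon box (as in the `box` field above)
    to an axis-aligned [x_min, y_min, x_max, y_max] rectangle."""
    xs = [p[0] for p in quad]
    ys = [p[1] for p in quad]
    return [min(xs), min(ys), max(xs), max(ys)]

print(quad_to_bbox([[10.0, 5.0], [50.0, 5.0], [50.0, 20.0], [10.0, 20.0]]))
# -> [10.0, 5.0, 50.0, 20.0]
```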
## Dataset Creation
### Source Data
The dataset was obtained from [ICDAR2019 Competition on Scanned Receipt OCR and Information Extraction](https://rrc.cvc.uab.es/?ch=13)
### Annotations
#### Annotation process
Additional labels for receipt line items were added using the open source [labelme](https://github.com/wkentaro/labelme) tool.
#### Who are the annotators?
Arvind Rajan (added labels to the original text boxes from the source)
## Additional Information
### Licensing Information
MIT License
### Contributions
Thanks to [@arvindrajan92](https://github.com/arvindrajan92) for adding this dataset. |
breezedeus | null | null | null | false | null | false | breezedeus/openfonts | 2022-10-30T06:37:11.000Z | null | false | 06581f273fd26b82fb36eecb48ddda298564f29f | [] | [
"license:ofl-1.1"
] | https://huggingface.co/datasets/breezedeus/openfonts/resolve/main/README.md | ---
license: ofl-1.1
---
Free Fonts for Simplified Chinese, downloaded from [Google Fonts](https://fonts.google.com/?subset=chinese-simplified). |
RobinWZQ | null | null | null | false | null | false | RobinWZQ/landscape_paintings | 2022-10-30T08:22:20.000Z | null | false | a14c88ac15bf2b44eeddf63c1876f978e20252cf | [] | [
"license:cc-by-nc-4.0"
] | https://huggingface.co/datasets/RobinWZQ/landscape_paintings/resolve/main/README.md | ---
license: cc-by-nc-4.0
---
|
api19750904 | null | null | null | false | 1 | false | api19750904/efeverde | 2022-10-30T09:30:29.000Z | null | false | 1a267499f05a2ada702cca61e9caf6ce4ed0cd6d | [] | [] | https://huggingface.co/datasets/api19750904/efeverde/resolve/main/README.md | noticias medioambiente |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/wlop_style | 2022-11-03T23:34:09.000Z | null | false | a8f3bebe787e1b70a2bc5d3f6025b414a2eb4467 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/wlop_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Wlop Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by wlop_style"```
Use the embedding with one of [SirVeggie](https://huggingface.co/SirVeggie)'s Wlop models for best results
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ImByEK5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/BndPSqd.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/4cB2B28.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Hw5FMID.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ddwJwoO.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/torino_art | 2022-10-30T09:53:46.000Z | null | false | 3971d9415584a57e6564fcc83310433c52a7bb82 | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/torino_art/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Torino Artist Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"drawn by torino_art"```
If it is too strong, just add [] around it.
Trained until 12800 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/xnRZgRb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/AcHsCMX.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/egIlKhy.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/nZQh3da.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/V9UFqn2.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Foxes | null | null | null | false | null | false | Foxes/image | 2022-10-30T10:03:18.000Z | null | false | a89cc21ad1f15b9fc5dd53a517e8ab7611315b3e | [] | [
"license:other"
] | https://huggingface.co/datasets/Foxes/image/resolve/main/README.md | ---
license: other
---
|
Zakia | null | null | null | false | null | false | Zakia/test | 2022-10-30T10:14:09.000Z | null | false | 1104b20e2e295532383774162647922afd7ae301 | [] | [
"doi:10.57967/hf/0074",
"license:cc-by-4.0"
] | https://huggingface.co/datasets/Zakia/test/resolve/main/README.md | ---
license: cc-by-4.0
---
|
PartiallyTyped | null | null | null | false | null | false | PartiallyTyped/answerable_tydiqa_raw | 2022-10-30T10:19:07.000Z | null | false | 6892e2e8f10b7b385041ec817f024c8dfa4cbad2 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_raw/resolve/main/README.md | ---
dataset_info:
features:
- name: question
dtype: string
- name: language
dtype: string
- name: golds
struct:
- name: answer_start
sequence: int64
- name: answer_text
sequence: string
- name: context
dtype: string
- name: seq_id
dtype: string
splits:
- name: train
num_bytes: 21022889
num_examples: 29868
- name: validation
num_bytes: 2616173
num_examples: 3712
download_size: 16292808
dataset_size: 23639062
---
# Dataset Card for "answerable_tydiqa_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | null | false | PartiallyTyped/answerable_tydiqa_raw_split | 2022-10-30T10:19:44.000Z | null | false | 5dc06479106fbe781b1d1bb3c5da16ae4f3fdde0 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_raw_split/resolve/main/README.md | ---
dataset_info:
features:
- name: question
dtype: string
- name: language
dtype: string
- name: context
dtype: string
- name: seq_id
dtype: string
- name: golds
struct:
- name: answer_start
sequence: int64
- name: answer_text
sequence: string
splits:
- name: train
num_bytes: 32809511
num_examples: 129290
- name: validation
num_bytes: 4034498
num_examples: 15801
download_size: 17092210
dataset_size: 36844009
---
# Dataset Card for "answerable_tydiqa_raw_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | null | false | PartiallyTyped/answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6 | 2022-10-30T10:24:10.000Z | null | false | 9e80f0e386c0c307eea98787ffa2dc558105cbfb | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6/resolve/main/README.md | ---
dataset_info:
features:
- name: question
dtype: string
- name: language
dtype: string
- name: golds
struct:
- name: answer_start
sequence: int64
- name: answer_text
sequence: string
- name: context
dtype: string
- name: seq_id
dtype: string
splits:
- name: train
num_bytes: 21022889
num_examples: 29868
- name: validation
num_bytes: 2616173
num_examples: 3712
download_size: 16292808
dataset_size: 23639062
---
# Dataset Card for "answerable_tydiqa_5fbde19f5f4ac461c405a962adddaeb6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PartiallyTyped | null | null | null | false | null | false | PartiallyTyped/answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4 | 2022-10-30T10:26:33.000Z | null | false | 629ddc29395be3b5f982d8daf6d12731d7364931 | [] | [] | https://huggingface.co/datasets/PartiallyTyped/answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4/resolve/main/README.md | ---
dataset_info:
features:
- name: question
dtype: string
- name: language
dtype: string
- name: context
dtype: string
- name: seq_id
dtype: string
- name: golds
struct:
- name: answer_start
sequence: int64
- name: answer_text
sequence: string
splits:
- name: train
num_bytes: 32809511
num_examples: 129290
- name: validation
num_bytes: 4034498
num_examples: 15801
download_size: 17092210
dataset_size: 36844009
---
# Dataset Card for "answerable_tydiqa_6fe3e6eac99651ae0255a686875476a4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Foxter | null | null | null | false | null | false | Foxter/1 | 2022-10-30T11:12:52.000Z | null | false | 6f9b86b4c5141cc8f5b8db89af92ec93ac7ea3d1 | [] | [] | https://huggingface.co/datasets/Foxter/1/resolve/main/README.md | |
zZWipeoutZz | null | null | null | false | null | false | zZWipeoutZz/assassin_style | 2022-10-30T13:00:51.000Z | null | false | c663a7a901ed9bfe086d513ce9de7aa2dbea5680 | [] | [
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/zZWipeoutZz/assassin_style/resolve/main/README.md | ---
license: creativeml-openrail-m
---
<h4> Usage </h4>
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
<em style="font-weight:600">art by assassin_style </em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by assassin_style-6500</em></li>
<li>10,000 steps <em>Usage: art by assassin_style-10000</em> </li>
<li>15,000 steps <em>Usage: art by assassin_style </em></li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%/" width="100%" src="https://i.imgur.com/RhE7Qce.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/wVOH8GU.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/YaBbNNK.png"></td>
<td><img height="100%/" width="100%" src="https://i.imgur.com/63HpAf1.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/nrkCPEf.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/nrkCPEf.jpg"></a>
|
auxten | null | null | null | false | null | false | auxten/movielens-20m | 2022-10-30T13:57:36.000Z | null | false | f62c99bb7b1c00254d300679172802b400281cfe | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/auxten/movielens-20m/resolve/main/README.md | ---
license: apache-2.0
---
MovieLens 20M data with training and test sets split by userId, for GAUC evaluation.
More details can be found at:
https://github.com/auxten/edgeRec/blob/main/example/movielens/readme.md
## User split
The user split status is stored in the `user` table; see the SQL below:
```sql
create table movies
(
movieId INTEGER,
title TEXT,
genres TEXT
);
create table ratings
(
userId INTEGER,
movieId INTEGER,
rating FLOAT,
timestamp INTEGER
);
create table tags
(
userId INTEGER,
movieId INTEGER,
tag TEXT,
timestamp INTEGER
);
-- import data from csv, do it with any tool
select count(distinct userId) from ratings; -- 138,493 users
create table user as select distinct userId, 0 as is_train from ratings;
-- choose 100,000 random users as training users
update user
set is_train = 1
where userId in
(SELECT userId
FROM (select distinct userId from ratings)
ORDER BY RANDOM()
LIMIT 100000);
select count(*) from user where is_train != 1; -- 38,493 test users
-- split train and test set of movielens-20m ratings
create table ratings_train as
select r.userId, movieId, rating, timestamp
from ratings r
left join user u on r.userId = u.userId
where is_train = 1;
create table ratings_test as
select r.userId, movieId, rating, timestamp
from ratings r
left join user u on r.userId = u.userId
where is_train = 0;
select count(*) from ratings_train; --14,393,526
select count(*) from ratings_test; --5,606,737
select count(*) from ratings; --20,000,263
```
## User feature
`user_feature_train` and `user_feature_test` are pre-processed user features;
see the SQL below:
```sql
-- user feature prepare
create table user_feature_train as
select r1.userId, ugenres, avgRating, cntRating
from
(
select userId, avg(rating) as avgRating,
count(rating) cntRating
from ratings_train r1 group by userId
) r1 left join (
select userId,
group_concat(genres) as ugenres
from ratings_train r
left join movies t2 on r.movieId = t2.movieId
where r.rating > 3.5
group by userId
) r2 on r2.userId = r1.userId;
-- user feature prepare
create table user_feature_test as
select r1.userId, ugenres, avgRating, cntRating
from
(
select userId, avg(rating) as avgRating,
count(rating) cntRating
from ratings_test r1 group by userId
) r1 left join (
select userId,
group_concat(genres) as ugenres
from ratings_test r
left join movies t2 on r.movieId = t2.movieId
where r.rating > 3.5
group by userId
) r2 on r2.userId = r1.userId;
```
## User behavior
```sql
create table ratings_train_desc as
select r.userId, movieId, rating, timestamp
from ratings_train r order by r.userId, timestamp desc;
create table ratings_test_desc as
select r.userId, movieId, rating, timestamp
from ratings_test r order by r.userId, timestamp desc;
-- per-user behavior sequences, most recent first
create table ub_train as
select userId, group_concat(movieId) movieIds, group_concat(timestamp) timestamps
from ratings_train_desc group by userId;
create table ub_test as
select userId, group_concat(movieId) movieIds, group_concat(timestamp) timestamps
from ratings_test_desc group by userId;
```
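## GAUC sketch

The per-user split above exists so that GAUC (Group AUC, the impression-weighted average of per-user AUC) can be computed over the test users. As a minimal pure-Python illustration, not taken from the edgeRec repo (the `(user_id, label, score)` record shape is an assumption for this sketch):

```python
from collections import defaultdict

def auc(pairs):
    """AUC from (label, score) pairs via pairwise comparison; ties count 0.5."""
    pos = [s for y, s in pairs if y == 1]
    neg = [s for y, s in pairs if y == 0]
    if not pos or not neg:
        return None  # AUC is undefined for single-class users
    hits = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                hits += 1.0
            elif p == n:
                hits += 0.5
    return hits / (len(pos) * len(neg))

def gauc(records):
    """records: iterable of (user_id, label, score) tuples."""
    by_user = defaultdict(list)
    for uid, y, s in records:
        by_user[uid].append((y, s))
    num, den = 0.0, 0
    for pairs in by_user.values():
        a = auc(pairs)
        if a is not None:  # skip users without both classes
            num += a * len(pairs)
            den += len(pairs)
    return num / den

# toy records, e.g. ratings binarized at 3.5 as in the user-feature SQL above
records = [
    ("u1", 1, 0.9), ("u1", 0, 0.2), ("u1", 1, 0.7),
    ("u2", 1, 0.4), ("u2", 0, 0.6),
]
print(round(gauc(records), 3))  # → 0.6
```

Because GAUC averages AUC within each user, a user appearing in both train and test sets would leak behavior across the split, which is why the split above is by userId rather than by rating row.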
|
KEEPYs | null | null | null | false | null | false | KEEPYs/titou | 2022-10-30T14:25:31.000Z | null | false | fb223d9a4bb5dc26d0ed573c2318f3174c0b7b06 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/KEEPYs/titou/resolve/main/README.md | ---
license: openrail
---
|
lmqg | null | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | Human-annotated question generated by models. | false | 5 | false | lmqg/qg_annotation | 2022-10-30T15:08:30.000Z | null | false | b5c56fd50f5993b1cebb86586d286981ec05ae72 | [] | [
"arxiv:2210.03992",
"license:cc-by-4.0",
"language:en",
"multilinguality:monolingual",
"size_categories:<1K"
] | https://huggingface.co/datasets/lmqg/qg_annotation/resolve/main/README.md | ---
license: cc-by-4.0
pretty_name: QG Annotation
language: en
multilinguality: monolingual
size_categories: <1K
---
# Dataset Card for "lmqg/qg_annotation"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This data set contains human annotations of questions generated by different models, used to measure the correlation of automatic metrics with human judgment in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```python
{
"correctness": 1.8,
"grammaticality": 3.0,
"understandability": 2.4,
"prediction": "What trade did the Ming dynasty have a shortage of?",
"Bleu_4": 0.4961682999359617,
"METEOR": 0.3572683356086923,
"ROUGE_L": 0.7272727272727273,
"BERTScore": 0.9142221808433532,
"MoverScore": 0.6782580808848975,
"reference_raw": "What important trade did the Ming Dynasty have with Tibet?",
"answer_raw": "horse trade",
"paragraph_raw": "Some scholars note that Tibetan leaders during the Ming frequently engaged in civil war and conducted their own foreign diplomacy with neighboring states such as Nepal. Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet. Others argue that the significant religious nature of the relationship of the Ming court with Tibetan lamas is underrepresented in modern scholarship. In hopes of reviving the unique relationship of the earlier Mongol leader Kublai Khan (r. 1260\u20131294) and his spiritual superior Drog\u00f6n Ch\u00f6gyal Phagpa (1235\u20131280) of the Sakya school of Tibetan Buddhism, the Yongle Emperor (r. 1402\u20131424) made a concerted effort to build a secular and religious alliance with Deshin Shekpa (1384\u20131415), the Karmapa of the Karma Kagyu school. However, the Yongle Emperor's attempts were unsuccessful.",
"sentence_raw": "Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's shortage of horses for warfare and thus the importance of the horse trade with Tibet.",
"reference_norm": "what important trade did the ming dynasty have with tibet ?",
"model": "T5 Large"
}
```
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` |
karabas | null | null | null | false | null | false | karabas/Medal | 2022-10-30T20:02:07.000Z | null | false | 074904da08bd9c88c246cc4108954dd5b9df96ce | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/karabas/Medal/resolve/main/README.md | ---
license: apache-2.0
---
|
api19750904 | null | null | null | false | 1 | false | api19750904/efeverde_5_cat_lem | 2022-10-30T17:43:32.000Z | null | false | 66bebf8a6d23d46f11d9528c9b9c01cad0a78d2d | [] | [] | https://huggingface.co/datasets/api19750904/efeverde_5_cat_lem/resolve/main/README.md | efeverde |
karabas | null | null | null | false | null | false | karabas/small_medals | 2022-10-30T20:16:56.000Z | null | false | c6f24b11060e96ccd426b767f308acb33cd716fd | [] | [
"doi:10.57967/hf/0076",
"license:unlicense"
] | https://huggingface.co/datasets/karabas/small_medals/resolve/main/README.md | ---
license: unlicense
---
|
LVN | null | null | null | false | null | false | LVN/photo | 2022-10-30T21:11:15.000Z | null | false | 3ce53de6c851bc3fc44e9c2733d69db7e1185fc3 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LVN/photo/resolve/main/README.md | ---
license: openrail
---
|
Andris2067 | null | null | null | false | null | false | Andris2067/VPurvitis2 | 2022-10-30T23:14:16.000Z | null | false | b9dcaee150e77ece89e2a10c197ae823ea27a685 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/Andris2067/VPurvitis2/resolve/main/README.md | ---
license: openrail
---
|
society-ethics | null | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | This new dataset is designed to solve this great NLP task and is crafted with a lot of care. | false | 7 | false | society-ethics/LILA | 2022-11-14T22:24:32.000Z | null | false | eafbf8870900cf88c7b217aa79694f54a57c6264 | [] | [
"annotations_creators:expert-generated",
"license:other",
"language_creators:expert-generated",
"language:en",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:image-classification",
"tags:biodiversity",
"tags:camera trap data",
"tags:wildlife monitoring"
] | https://huggingface.co/datasets/society-ethics/LILA/resolve/main/README.md | ---
annotations_creators:
- expert-generated
license:
- other
language_creators:
- expert-generated
language:
- en
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-classification
tags:
- biodiversity
- camera trap data
- wildlife monitoring
pretty_name: LILA Camera Traps
---
# Dataset Card for LILA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lila.science/
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [info@lila.science](mailto:info@lila.science)
### Dataset Summary
LILA Camera Traps is an aggregate data set of images taken by camera traps, which are devices that automatically (e.g. via motion detection) capture images of wild animals to help ecological research.
This data set is the first aggregation of disparate camera trap data sets into a single training environment with a single [taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
This data set consists only of camera trap image data sets, whereas the broader [LILA](https://lila.science/) website also hosts other data sets related to biology and conservation, intended as a resource for both machine learning (ML) researchers and those who want to harness ML for this topic.
See below for information about each specific dataset that LILA contains:
<details>
<summary> Caltech Camera Traps </summary>
This data set contains 243,100 images from 140 camera locations in the Southwestern United States, with labels for 21 animal categories (plus empty), primarily at the species level (for example, the most common labels are opossum, raccoon, and coyote), and approximately 66,000 bounding box annotations. Approximately 70% of images are labeled as empty.
More information about this data set is available [here](https://beerys.github.io/CaltechCameraTraps/).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact caltechcameratraps@gmail.com.
If you use this data set, please cite the associated manuscript:
```bibtex
@inproceedings{DBLP:conf/eccv/BeeryHP18,
author = {Sara Beery and
Grant Van Horn and
Pietro Perona},
title = {Recognition in Terra Incognita},
booktitle = {Computer Vision - {ECCV} 2018 - 15th European Conference, Munich,
Germany, September 8-14, 2018, Proceedings, Part {XVI}},
pages = {472--489},
year = {2018},
crossref = {DBLP:conf/eccv/2018-16},
url = {https://doi.org/10.1007/978-3-030-01270-0\_28},
doi = {10.1007/978-3-030-01270-0\_28},
timestamp = {Mon, 08 Oct 2018 17:08:07 +0200},
biburl = {https://dblp.org/rec/bib/conf/eccv/BeeryHP18},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
</details>
<details>
<summary> ENA24 </summary>
This data set contains approximately 10,000 camera trap images representing 23 classes from Eastern North America, with bounding boxes on each image. The most common classes are “American Crow”, “American Black Bear”, and “Dog”.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{yousif2019dynamic,
title={Dynamic Programming Selection of Object Proposals for Sequence-Level Animal Species Classification in the Wild},
author={Yousif, Hayder and Kays, Roland and He, Zhihai},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
year={2019},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif](hyypp5@mail.missouri.edu).
</details>
<details>
<summary> Missouri Camera Traps </summary>
This data set contains approximately 25,000 camera trap images representing 20 species (for example, the most common labels are red deer, mouflon, and white-tailed deer). Images within each sequence share the same species label (even though the animal may not have been recorded in all the images in the sequence). Around 900 bounding boxes are included. These are very challenging sequences with highly cluttered and dynamic scenes. Spatial resolutions of the images vary from 1920 × 1080 to 2048 × 1536. Sequence lengths vary from 3 to more than 300 frames.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{zhang2016animal,
title={Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification},
author={Zhang, Zhi and He, Zhihai and Cao, Guitao and Cao, Wenming},
journal={IEEE Transactions on Multimedia},
volume={18},
number={10},
pages={2079--2092},
year={2016},
publisher={IEEE}
}
```
For questions about this data set, contact [Hayder Yousif](hyypp5@mail.missouri.edu) and [Zhi Zhang](zzbhf@mail.missouri.edu).
</details>
<details>
<summary> North American Camera Trap Images (NACTI) </summary>
This data set contains 3.7M camera trap images from five locations across the United States, with labels for 28 animal categories, primarily at the species level (for example, the most common labels are cattle, boar, and red deer). Approximately 12% of images are labeled as empty. We have also added bounding box annotations to 8892 images (mostly vehicles and birds).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
Please cite this manuscript if you use this data set:
```bibtex
@article{tabak2019machine,
title={Machine learning to classify animal species in camera trap images: Applications in ecology},
author={Tabak, Michael A and Norouzzadeh, Mohammad S and Wolfson, David W and Sweeney, Steven J and VerCauteren, Kurt C and Snow, Nathan P and Halseth, Joseph M and Di Salvo, Paul A and Lewis, Jesse S and White, Michael D and others},
journal={Methods in Ecology and Evolution},
volume={10},
number={4},
pages={585--590},
year={2019},
publisher={Wiley Online Library}
}
```
For questions about this data set, contact [northamericancameratrapimages@gmail.com](northamericancameratrapimages@gmail.com).
</details>
<details>
<summary> WCS Camera Traps </summary>
This data set contains approximately 1.4M camera trap images representing around 675 species from 12 countries, making it one of the most diverse camera trap data sets available publicly. Data were provided by the [Wildlife Conservation Society](https://www.wcs.org/). The most common classes are tayassu pecari (peccary), meleagris ocellata (ocellated turkey), and bos taurus (cattle). A complete list of classes and associated image counts is available here. Approximately 50% of images are empty. We have also added approximately 375,000 bounding box annotations to approximately 300,000 of those images, which come from sequences covering almost all locations.
Sequences are inferred from timestamps, so may not strictly represent bursts. Images were labeled at a combination of image and sequence level, so – as is the case with most camera trap data sets – empty images may be labeled as non-empty (if an animal was present in one frame of a sequence but not in others). Images containing humans are referred to in metadata, but are not included in the data files. You can find more information about the data set [on the LILA website](https://lila.science/datasets/wcscameratraps).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Wellington Camera Traps </summary>
This data set contains 270,450 images from 187 camera locations in Wellington, New Zealand. The cameras (Bushnell 119537, 119476, and 119436) recorded sequences of three images when triggered. Each sequence was labelled by citizen scientists and/or professional ecologists from Victoria University of Wellington into 17 classes: 15 animal categories (for example, the most common labels are bird, cat, and hedgehog), empty, and unclassifiable. Approximately 17% of images are labeled as empty. Images within each sequence share the same species label (even though the animal may not have been recorded in all three images).
If you use this data set, please cite the associated manuscript:
```bibtex
@article{anton2018monitoring,
title={Monitoring the mammalian fauna of urban areas using remote cameras and citizen science},
author={Anton, Victor and Hartley, Stephen and Geldenhuis, Andre and Wittmer, Heiko U},
journal={Journal of Urban Ecology},
volume={4},
number={1},
pages={juy002},
year={2018},
publisher={Oxford University Press}
}
```
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
For questions about this data set, contact [Victor Anton](vykanton@gmail.com).
</details>
<details>
<summary> Island Conservation Camera Traps </summary>
This data set contains approximately 123,000 camera trap images from 123 camera locations from 7 islands in 6 countries. Data were provided by Island Conservation during projects conducted to prevent the extinction of threatened species on islands.
The most common classes are rabbit, rat, petrel, iguana, cat, goat, and pig, with both rat and cat represented between multiple island sites representing significantly different ecosystems (tropical forest, dry forest, and temperate forests). Additionally, this data set represents data from locations and ecosystems that, to our knowledge, are not well represented in publicly available datasets including >1,000 images each of iguanas, petrels, and shearwaters. A complete list of classes and associated image counts is available here. Approximately 60% of the images are empty. We have also included approximately 65,000 bounding box annotations for about 50,000 images.
In general cameras were dispersed across each project site to detect the presence of invasive vertebrate species that threaten native island species. Cameras were set to capture bursts of photos for each motion detection event (between three and eight photos) with a set delay between events (10 to 30 seconds) to minimize the number of photos. Images containing humans are referred to in metadata, but are not included in the data files.
For questions about this data set, contact [David Will](david.will@islandconservation.org) at Island Conservation.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata. If those images are important to your work, contact us; in some cases it will be possible to release those images under an alternative license.
</details>
<details>
<summary> Channel Islands Camera Traps </summary>
This data set contains 246,529 camera trap images from 73 camera locations in the Channel Islands, California. All animals are annotated with bounding boxes. Data were provided by The Nature Conservancy. Animals are classified as rodent1 (82914), fox (48150), bird (11099), skunk (1071), or other (159). 114,949 images (47%) are empty. All images of rats were taken on islands already known to have rat populations.
If you use these data in a publication or report, please use the following citation:
The Nature Conservancy (2021): Channel Islands Camera Traps 1.0. The Nature Conservancy. Dataset.
For questions about this data set, contact [Nathaniel Rindlaub](nathaniel.rindlaub@TNC.ORG) at The Nature Conservancy.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
The original data set included a “human” class label; for privacy reasons, we have removed those images from this version of the data set. Those labels are still present in the metadata.
</details>
<details>
<summary> Idaho Camera Traps </summary>
This data set contains approximately 1.5 million camera trap images from Idaho. Labels are provided for 62 categories, most of which are animal classes (“deer”, “elk”, and “cattle” are the most common animal classes), but labels also include some state indicators (e.g. “snow on lens”, “foggy lens”). Approximately 70.5% of images are labeled as empty. Annotations were assigned to image sequences, rather than individual images, so annotations are meaningful only at the sequence level.
The metadata contains references to images containing humans, but these have been removed from the dataset (along with images containing vehicles and domestic dogs).
Images were provided by the Idaho Department of Fish and Game. No representations or warranties are made regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose. Some information shared under this agreement may not have undergone quality assurance procedures and should be considered provisional. Images may not be sold in any format, but may be used for scientific publications. Please acknowledge the Idaho Department of Fish and Game when using images for publication or scientific communication.
</details>
<details>
<summary> Snapshot Serengeti </summary>
This data set contains approximately 2.65M sequences of camera trap images, totaling 7.1M images, from seasons one through eleven of the [Snapshot Serengeti project](https://snapshotserengeti.org/) -- the flagship project of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Serengeti National Park in Tanzania is best known for the massive annual migrations of wildebeest and zebra that drive the cycling of its dynamic ecosystem.
Labels are provided for 61 categories, primarily at the species level (for example, the most common labels are wildebeest, zebra, and Thomson’s gazelle). Approximately 76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshotserengeti-v-2-0/SnapshotSerengeti_S1-11_v2.1.species_list.csv). We have also added approximately 150,000 bounding box annotations to approximately 78,000 of those images.
The images and species-level labels are described in more detail in the associated manuscript:
```bibtex
@misc{dryad_5pt92,
title = {Data from: Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna},
author = {Swanson, AB and Kosmala, M and Lintott, CJ and Simpson, RJ and Smith, A and Packer, C},
year = {2015},
journal = {Scientific Data},
URL = {https://doi.org/10.5061/dryad.5pt92},
doi = {doi:10.5061/dryad.5pt92},
publisher = {Dryad Digital Repository}
}
```
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Karoo </summary>
This data set contains 14889 sequences of camera trap images, totaling 38074 images, from the [Snapshot Karoo](https://www.zooniverse.org/projects/shuebner729/snapshot-karoo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Karoo National Park, located in the arid Nama Karoo biome of South Africa, is defined by its endemic vegetation and mountain landscapes. Its unique topographical gradient has led to a surprising amount of biodiversity, with 58 mammals and more than 200 bird species recorded, as well as a multitude of reptilian species.
Labels are provided for 38 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, hartebeestred, and kudu). Approximately 83.02% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KAR/SnapshotKaroo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kgalagadi </summary>
This data set contains 3611 sequences of camera trap images, totaling 10222 images, from the [Snapshot Kgalagadi](https://www.zooniverse.org/projects/shuebner729/snapshot-kgalagadi/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. The Kgalagadi Transfrontier Park stretches from the Namibian border across South Africa and into Botswana, covering a landscape commonly referred to as the Kalahari – an arid savanna. This region is of great interest to help us understand how animals cope with extreme temperatures at both ends of the scale.
Labels are provided for 31 categories, primarily at the species level (for example, the most common labels are gemsbokoryx, birdother, and ostrich). Approximately 76.14% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KGA/SnapshotKgalagadi_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Enonkishu </summary>
This data set contains 13301 sequences of camera trap images, totaling 28544 images, from the [Snapshot Enonkishu](https://www.zooniverse.org/projects/aguthmann/snapshot-enonkishu) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Enonkishu Conservancy is located on the northern boundary of the Mara-Serengeti ecosystem in Kenya, and is managed by a consortium of stakeholders and land-owning Maasai families. Their aim is to promote coexistence between wildlife and livestock in order to encourage regenerative grazing and build stability in the Mara conservancies.
Labels are provided for 39 categories, primarily at the species level (for example, the most common labels are impala, warthog, and zebra). Approximately 64.76% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/ENO/SnapshotEnonkishu_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Camdeboo </summary>
This data set contains 12132 sequences of camera trap images, totaling 30227 images, from the [Snapshot Camdeboo](https://www.zooniverse.org/projects/shuebner729/snapshot-camdeboo) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Camdeboo National Park, South Africa is crucial habitat for many birds on a global scale, with greater than fifty endemic and near-endemic species and many migratory species.
Labels are provided for 43 categories, primarily at the species level (for example, the most common labels are kudu, springbok, and ostrich). Approximately 43.74% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/CDB/SnapshotCamdeboo_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Mountain Zebra </summary>
This data set contains 71688 sequences of camera trap images, totaling 73034 images, from the [Snapshot Mountain Zebra](https://www.zooniverse.org/projects/meredithspalmer/snapshot-mountain-zebra/) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Mountain Zebra National Park is located in the Eastern Cape of South Africa in a transitional area between several distinct biomes, which means it is home to many endemic species. As the name suggests, this park contains the largest remnant population of Cape Mountain zebras, ~700 as of 2019 and increasing steadily every year.
Labels are provided for 54 categories, primarily at the species level (for example, the most common labels are zebramountain, kudu, and springbok). Approximately 91.23% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/MTZ/SnapshotMountainZebra_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Snapshot Kruger </summary>
This data set contains 4747 sequences of camera trap images, totaling 10072 images, from the [Snapshot Kruger](https://www.zooniverse.org/projects/shuebner729/snapshot-kruger) project, part of the Snapshot Safari network. Using the same camera trapping protocols at every site, Snapshot Safari members are collecting standardized data from many protected areas in Africa, which allows for cross-site comparisons to assess the efficacy of conservation and restoration programs. Kruger National Park, South Africa has been a refuge for wildlife since its establishment in 1898, and it houses one of the most diverse wildlife assemblages remaining in Africa. The Snapshot Safari grid was established in 2018 as part of a research project assessing the impacts of large mammals on plant life as boundary fences were removed and wildlife reoccupied areas of previous extirpation.
Labels are provided for 46 categories, primarily at the species level (for example, the most common labels are impala, elephant, and buffalo). Approximately 61.60% of images are labeled as empty. A full list of species and associated image counts is available [here](https://lilablobssc.blob.core.windows.net/snapshot-safari/KRU/SnapshotKruger_S1_v1.0.species_list.csv).
For questions about this data set, contact [Sarah Huebner](huebn090@umn.edu) at the University of Minnesota.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> SWG Camera Traps </summary>
This data set contains 436,617 sequences of camera trap images from 982 locations in Vietnam and Lao, totaling 2,039,657 images. Labels are provided for 120 categories, primarily at the species level (for example, the most common labels are “Eurasian Wild Pig”, “Large-antlered Muntjac”, and “Unidentified Murid”). Approximately 12.98% of images are labeled as empty. A full list of species and associated image counts is available here. 101,659 bounding boxes are provided on 88,135 images.
This data set is provided by the Saola Working Group; providers include:
- IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group (SWG)
- Asian Arks
- Wildlife Conservation Society (Lao)
- WWF Lao
- Integrated Conservation of Biodiversity and Forests project, Lao (ICBF)
- Center for Environment and Rural Development, Vinh University, Vietnam
If you use these data in a publication or report, please use the following citation:
SWG (2021): Northern and Central Annamites Camera Traps 2.0. IUCN SSC Asian Wild Cattle Specialist Group’s Saola Working Group. Dataset.
For questions about this data set, contact saolawg@gmail.com.
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
<details>
<summary> Orinoquia Camera Traps </summary>
This data set contains 104,782 images collected from a 50-camera-trap array deployed from January to July 2020 within the private natural reserves El Rey Zamuro (31 km2) and Las Unamas (40 km2), located in the Meta department in the Orinoquía region in central Colombia. We deployed cameras using a stratified random sampling design across forest core area strata. Cameras were spaced 1 km apart from one another, located facing wildlife trails, and deployed with no bait. Images were stored and reviewed by experts using the Wildlife Insights platform.
This data set contains 51 classes, predominantly mammals such as the collared peccary, black agouti, spotted paca, white-lipped peccary, lowland tapir, and giant anteater. Approximately 20% of images are empty.
The main purpose of the study is to understand how humans, wildlife, and domestic animals interact in multi-functional landscapes (e.g., agricultural livestock areas with native forest remnants). However, this data set was also used to review model performance of AI-powered platforms – Wildlife Insights (WI), MegaDetector (MD), and Machine Learning for Wildlife Image Classification (MLWIC2). We provide a demonstration of the use of WI, MD, and MLWIC2 and R code for evaluating model performance of these platforms in the accompanying [GitHub repository](https://github.com/julianavelez1/Processing-Camera-Trap-Data-Using-AI).
If you use these data in a publication or report, please use the following citation:
```bibtex
@article{velez2022choosing,
title={Choosing an Appropriate Platform and Workflow for Processing Camera Trap Data using Artificial Intelligence},
author={V{\'e}lez, Juliana and Castiblanco-Camacho, Paula J and Tabak, Michael A and Chalmers, Carl and Fergus, Paul and Fieberg, John},
journal={arXiv preprint arXiv:2202.02283},
year={2022}
}
```
For questions about this data set, contact [Juliana Velez Gomez](julianavelezgomez@gmail.com).
This data set is released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/).
</details>
### Supported Tasks and Leaderboards
No leaderboards exist for LILA.
### Languages
The [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/) is provided in English.
## Dataset Structure
### Data Instances
The data annotations are provided in [COCO Camera Traps](https://github.com/Microsoft/CameraTraps/blob/master/data_management/README.md#coco-cameratraps-format) format.
All of the datasets share a common category taxonomy, which is defined on the [LILA website](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/).
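As a sketch of what working with a COCO Camera Traps file looks like, the snippet below builds a tiny, made-up annotation payload (the image names, category IDs, and label names are illustrative, not real LILA data) and tallies labels per category:

```python
from collections import Counter

# A minimal, hypothetical COCO Camera Traps payload; real LILA files also
# carry an "info" block and extra per-image fields such as "location".
coco = {
    "images": [
        {"id": "img1", "file_name": "loc01/img1.jpg", "width": 2048, "height": 1536},
        {"id": "img2", "file_name": "loc01/img2.jpg", "width": 2048, "height": 1536},
    ],
    "categories": [{"id": 0, "name": "empty"}, {"id": 1, "name": "impala"}],
    "annotations": [
        {"id": "ann1", "image_id": "img1", "category_id": 1},
        {"id": "ann2", "image_id": "img2", "category_id": 0},
    ],
}

# Map category IDs to names, then count annotations per label.
id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
label_counts = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])
print(label_counts)  # Counter({'impala': 1, 'empty': 1})
```

The same pattern applies to a real annotation file loaded with `json.load`, since all LILA category IDs resolve through the shared taxonomy.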
### Data Fields
Different datasets may have slightly varying fields, which include:
`id`: image ID \
`file_name`: the file name \
`width` and `height`: the dimensions of the image \
`study`: which research study the image was collected as part of \
`location` : the name of the location at which the image was taken \
`annotations`: information about image annotation, which includes `category_id` (the reference to the [LILA taxonomy](https://lila.science/taxonomy-mapping-for-camera-trap-data-sets/)), the bounding box/boxes (`bbox`/`bboxes`) if any, as well as any other annotation information. \
`image` : the `path` to download the image and any other information that is available, e.g. its size in `bytes`.
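A sketch of reading these fields from a single instance — the instance below is a hand-written minimal example following the field descriptions above, not real LILA data:

```python
# Hypothetical instance mirroring the fields described above.
instance = {
    "id": "seq0001_img0",
    "file_name": "loc_A/seq0001_img0.jpg",
    "width": 2048,
    "height": 1536,
    "study": "example_study",
    "location": "loc_A",
    "annotations": {"category_id": [372], "bbox": [[100.0, 200.0, 300.0, 250.0]]},
    "image": {"path": "loc_A/seq0001_img0.jpg", "bytes": None},
}

# Pull out the taxonomy references and any bounding boxes for this image.
category_ids = instance["annotations"]["category_id"]
bboxes = instance["annotations"].get("bbox", [])
print(category_ids, len(bboxes))  # [372] 1
```

Since different constituent datasets expose slightly different fields, guarding optional keys with `.get()` (as with `bbox` above) is a reasonable precaution.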
### Data Splits
This dataset does not have a predefined train/test split.
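Because there is no canonical split, users typically create their own; with camera trap data, splitting by location rather than by image is a common precaution against near-duplicate frames from the same camera leaking across splits. A minimal sketch with hypothetical location IDs:

```python
import random

# Hypothetical camera-trap locations; splitting by location keeps images
# from the same camera out of both train and test.
locations = [f"loc_{i:02d}" for i in range(20)]
rng = random.Random(0)  # fixed seed for reproducibility
rng.shuffle(locations)

n_test = int(0.2 * len(locations))
test_locs = set(locations[:n_test])
train_locs = set(locations[n_test:])
assert not (train_locs & test_locs)  # disjoint by construction
print(len(train_locs), len(test_locs))  # 16 4
```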
## Dataset Creation
### Curation Rationale
The datasets that constitute LILA have been provided by the organizations, projects and researchers who collected them.
### Source Data
#### Initial data collection and normalization
N/A
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
Each dataset has been annotated by the members of the project/organization that provided it.
#### Who are the annotators?
The annotations have been provided by domain experts in fields such as biology and ecology.
### Personal and Sensitive Information
Some of the original data sets included a “human” class label; for privacy reasons, these images were removed. Those labels are still present in the metadata. If those images are important to your work, contact the [LILA maintainers](info@lila.science), since in some cases it will be possible to release those images under an alternative license.
## Considerations for Using the Data
### Social Impact of Dataset
Machine learning depends on labeled data, but accessing such data in biology and conservation is a challenge. Consequently, everyone benefits when labeled data is made available. Biologists and conservation scientists benefit by having data to train on, and free hosting allows teams to multiply the impact of their data (we suggest listing this benefit in grant proposals that fund data collection). ML researchers benefit by having data to experiment with.
### Discussion of Biases
These datasets do not represent global diversity, but are examples of local ecosystems and animals.
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
LILA BC is maintained by a working group that includes representatives from Ecologize, Zooniverse, the Evolving AI Lab, Snapshot Safari, and Microsoft AI for Earth. Hosting on Microsoft Azure is provided by Microsoft AI for Earth.
### Licensing Information
Many, but not all, LILA data sets were released under the [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/). Check the details of the specific dataset you are using in its section above.
### Citation Information
Citations for each dataset (if they exist) are provided in its section above.
### Contributions
Thanks to [@NimaBoscarino](https://github.com/NimaBoscarino/) for adding this dataset.
|
amin-sadeghi | null | null | null | false | null | false | amin-sadeghi/digital-rocks | 2022-10-31T00:04:26.000Z | null | false | 191bf8f52845853c319861567520542fe495ff5c | [] | [
"license:mit"
] | https://huggingface.co/datasets/amin-sadeghi/digital-rocks/resolve/main/README.md | ---
license: mit
---
|
XiangBo | null | null | null | false | null | false | XiangBo/medQA2kw-dpr | 2022-10-31T02:14:15.000Z | null | false | 855a559efea6b2d9bc43b23ff14c734d04464e1c | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/XiangBo/medQA2kw-dpr/resolve/main/README.md | ---
license: apache-2.0
---
|
oo92 | null | null | null | false | 87 | false | oo92/diffusion-data | 2022-10-31T03:08:58.000Z | null | false | 9d031f0412a79f3f53cfb7b584560cb40775bf33 | [] | [
"license:mit"
] | https://huggingface.co/datasets/oo92/diffusion-data/resolve/main/README.md | ---
license: mit
---
|
abhinavk | null | @inproceedings{
title={{OPENPI V2}: }
author={}
note={}
year={2022}
} | TEMPORARY DESCRIPTION | false | 23 | false | abhinavk/openpi_v2 | 2022-11-07T02:23:34.000Z | null | false | 4f352870d3552163c0b4be7ee7195e1cf402f5b3 | [] | [
"annotations_creators:expert-generated",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"task_ids:natural-language-inference"
] | https://huggingface.co/datasets/abhinavk/openpi_v2/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators: []
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: openpi_v2
size_categories:
- 10K<n<100K
source_datasets: []
tags: []
task_categories:
- question-answering
- text-classification
task_ids:
- entity-linking-classification
- natural-language-inference
---
# Dataset Card for openpi_v2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Open PI is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. Our solution is a new task formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary.
### Supported Tasks and Leaderboards
- `Task 1`: Given paragraph (e.g., with 5 steps), identify entities that change (challenge: implicit entities, some explicit entities that don’t change)
- `Task 3`: Given paragraph, identify the attributes of entity that change (challenge: implicit entities, attributes & many combinations)
- `Task 4`: Given paragraph & an entity, identify the sequence of attribute value changes (challenge: implicit attributes)
- `Task 7`: Given image url, identify the visual attributes of entity and non-visual attributes of entity that change
### Languages
English
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```
{
"goal": "goal1_text",
"steps": [
"step1_text",
"step2_text",
...
],
"topics": "topic1_annotation",
"image_urls": [
"step1_url_text",
"step2_url_text",
...
],
"states": [
{
"answers_openpiv1_metadata": {
"entity": "entity1 | entity2 | ...",
"attribute": "attribute1 | attribute2 | ...",
"answers": [
"before: step1_entity1_before | step1_entity2_before, after: step1_entity1_after | step1_entity2_after",
...
],
"modality": [
"step1_entity1_modality_id | step1_entity2_modality_id",
...
]
},
"entity": "entity1 | entity2 | ...",
"attribute": "attribute1 | attribute2 | ...",
"answers": [
"before: step1_entity1_before_merged | step1_entity2_before_merged, after: step1_entity1_after_merged | step1_entity2_after_merged",
...
]
}
]
}
```
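As the schema above shows, multi-value fields such as `entity` and `attribute` pack several values into a single `" | "`-separated string. A sketch of unpacking them — the instance below is illustrative, not taken from the real data:

```python
# Illustrative state record mirroring the schema above.
state = {
    "entity": "pan | oil",
    "attribute": "temperature | location",
    "answers": ["before: cold | in bottle, after: hot | in pan"],
}

# Split the " | "-separated fields into parallel lists of values.
entities = [e.strip() for e in state["entity"].split("|")]
attributes = [a.strip() for a in state["attribute"].split("|")]
print(entities)    # ['pan', 'oil']
print(attributes)  # ['temperature', 'location']
```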
### Data Fields
The following is an excerpt from the dataset README:
Within "goal", "steps", "topics", and "image_urls", the fields should be self-explanatory. Listed below is an explanation about those within "states":
#### Fields specific to questions:
### Data Splits
Train, Valid, Dev
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
KETI-AIR | null | There is no citation information | # Korean-English Parallel Translation Corpus Data for Science and Technology
## Introduction
- A Korean-English and English-Korean corpus for use as training data for neural-network-based machine translation
- Training data for improving the performance of science and technology translators
## Purpose of Construction
- To build a corpus for training artificial neural network machine translators that have a strong grasp of specialized terminology and can effectively convey academic meaning
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
    "aihub_scitech_translation.py",
    "base",
    cache_dir="huggingface_datasets",
    data_dir="data",
    ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
    print(item)
    exit()
```
## Data Contact
| Contact | Phone | Email |
| ------------- | ------------- | ------------- |
| 최규동 | 1833-5926 | ken.choi@twigfarm.net |
## Copyright
### About the Data
The AI training data provided through AI Hub (hereinafter "AI data") was constructed as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this project, including the data, AI application models, source code of data authoring tools, and manuals (hereinafter "AI data etc."), belong to the performing and participating organizations that constructed the AI data etc. (hereinafter "performing organizations etc.") and to NIA.
This AI data etc. was constructed to advance AI technology and AI products and services, and may be used for commercial and non-commercial research and development purposes in a variety of fields, such as intelligent products and services and chatbots.
### Data Usage Policy
- To use this AI data etc., you must agree to and comply with the following:
1. When using this AI data etc., you must state that it is a result of an NIA project, and the same statement must appear in any derivative works based on it.
2. For corporations, organizations, or individuals located outside Korea to use the AI data etc., a separate agreement with the performing organizations etc. and NIA is required.
3. Taking this AI data etc. outside Korea requires a separate agreement with the performing organizations etc. and NIA.
4. This AI data may only be used for training artificial intelligence models. NIA may refuse to provide the AI data etc. if the purpose, method, or content of its use is judged to be illegal or inappropriate and, where it has already been provided, may demand suspension of use and the return or destruction of the AI data etc.
5. The AI data etc. you receive must not be shown, provided, transferred, lent, or sold to any other corporation, organization, or individual not approved by the performing organizations etc. and NIA.
6. All civil and criminal liability arising from use of the AI data etc. beyond the purpose stated in Clause 4, or from unauthorized viewing, provision, transfer, lending, or sale under Clause 5, rests with the corporation, organization, or individual that used the AI data etc.
7. If a user discovers that a dataset provided by AI Hub contains personal information or similar, the user must immediately report this to AI Hub and delete the downloaded dataset.
8. De-identified information (including synthetic information) received from AI Hub must be used safely for purposes such as developing AI services, and no act whatsoever may be performed to re-identify individuals using it.
9. If NIA conducts a survey on use cases, outcomes, and the like in the future, users must respond in good faith.
### How to Request a Data Download
1. To download the AI data etc. provided through AI Hub, a separate procedure is required to verify the applicant's identity, provide information, and state the purpose of use.
2. Materials other than the AI data itself, such as data descriptions and authoring tools, can be used without a separate application procedure or login.
3. For AI data etc. whose rights holder is not NIA, you must follow the usage policy and download procedure of the relevant organization; please note that this is unrelated to AI Hub.
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_scitech_translation/resolve/main/README.md | ---
license: apache-2.0
---
|