id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
rohitp1/librispeech_asr_clean | 2023-01-03T18:08:17.000Z | [
"license:cc-by-4.0",
"region:us"
] | rohitp1 | LibriSpeech is a corpus of approximately 1000 hours of read English speech with a sampling rate of 16 kHz,
prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read
audiobooks from the LibriVox project, and has been carefully segmented and aligned. | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | null | 0 | 4 | ---
license: cc-by-4.0
---
This dataset contains only the 100-hour train split of librispeech_clean. The librispeech-other, test-clean, and dev-clean subsets are unchanged. |
keremberke/aerial-sheep-object-detection | 2023-01-05T08:02:23.000Z | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | keremberke | null | @misc{ aerial-sheep_dataset,
title = { Aerial Sheep Dataset },
type = { Open Source Dataset },
author = { Riis },
howpublished = { \\url{ https://universe.roboflow.com/riis/aerial-sheep } },
url = { https://universe.roboflow.com/riis/aerial-sheep },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-02 },
} | null | 4 | 4 | ---
task_categories:
- object-detection
tags:
- roboflow
---
### Roboflow Dataset Page
[https://universe.roboflow.com/riis/aerial-sheep/dataset/1](https://universe.roboflow.com/riis/aerial-sheep/dataset/1?ref=roboflow2huggingface)
### Dataset Labels
```
['sheep']
```
### Citation
```
@misc{ aerial-sheep_dataset,
title = { Aerial Sheep Dataset },
type = { Open Source Dataset },
author = { Riis },
howpublished = { \\url{ https://universe.roboflow.com/riis/aerial-sheep } },
url = { https://universe.roboflow.com/riis/aerial-sheep },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-02 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 2, 2022 at 4:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4133 images.
Sheep are annotated in COCO format.
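COCO object-detection boxes are stored as `[x_min, y_min, width, height]` in absolute pixels. A tiny helper (illustrative only, not part of this dataset's tooling) converts them to corner coordinates, which many plotting and evaluation tools expect:

```python
def coco_to_corners(bbox):
    """Convert a COCO [x_min, y_min, width, height] box to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# A hypothetical sheep annotation on a 600x600 image:
print(coco_to_corners([120, 80, 40, 30]))  # [120, 80, 160, 110]
```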
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 600x600 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* 50% probability of horizontal flip
* 50% probability of vertical flip
* Randomly crop between 0 and 20 percent of the image
* Random brightness adjustment of between -15 and +15 percent
* Random exposure adjustment of between -10 and +10 percent
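These transforms can be sketched in plain Python on a toy grayscale image (a list of pixel rows). This is only an illustration of the listed augmentations — not Roboflow's actual implementation — and the random crop and exposure steps are omitted for brevity:

```python
import random

def augment(image, rng):
    """Apply 50% flips and a random brightness shift to a 2D list of 0-255 pixels."""
    if rng.random() < 0.5:  # 50% probability of horizontal flip
        image = [row[::-1] for row in image]
    if rng.random() < 0.5:  # 50% probability of vertical flip
        image = image[::-1]
    factor = 1 + rng.uniform(-0.15, 0.15)  # brightness between -15% and +15%
    return [[min(255, max(0, round(p * factor))) for p in row] for row in image]

rng = random.Random(0)
augmented = augment([[10, 20], [30, 40]], rng)
print(len(augmented), len(augmented[0]))  # shape is unchanged: 2 2
```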
|
DavidVivancos/MindBigData2022_VisMNIST_MU2 | 2023-01-04T08:18:34.000Z | [
"license:odbl",
"region:us"
] | DavidVivancos | null | null | null | 0 | 4 | ---
license: odbl
---
|
keremberke/smoke-object-detection | 2023-01-04T20:54:45.000Z | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | keremberke | null | @misc{ smoke100-uwe4t_dataset,
title = { Smoke100 Dataset },
type = { Open Source Dataset },
author = { Smoke Detection },
howpublished = { \\url{ https://universe.roboflow.com/smoke-detection/smoke100-uwe4t } },
url = { https://universe.roboflow.com/smoke-detection/smoke100-uwe4t },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-02 },
} | null | 2 | 4 | ---
task_categories:
- object-detection
tags:
- roboflow
---
### Roboflow Dataset Page
https://universe.roboflow.com/smoke-detection/smoke100-uwe4t/dataset/4
### Dataset Labels
```
['smoke']
```
### Citation
```
@misc{ smoke100-uwe4t_dataset,
title = { Smoke100 Dataset },
type = { Open Source Dataset },
author = { Smoke Detection },
howpublished = { \\url{ https://universe.roboflow.com/smoke-detection/smoke100-uwe4t } },
url = { https://universe.roboflow.com/smoke-detection/smoke100-uwe4t },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-02 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 17, 2022 at 3:42 PM GMT
It includes 21578 images.
Smoke is annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
irds/gov_trec-web-2002_named-page | 2023-01-05T03:04:44.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/gov",
"region:us"
] | irds | null | null | null | 0 | 4 | ---
pretty_name: '`gov/trec-web-2002/named-page`'
viewer: false
source_datasets: ['irds/gov']
task_categories:
- text-retrieval
---
# Dataset Card for `gov/trec-web-2002/named-page`
The `gov/trec-web-2002/named-page` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/gov#gov/trec-web-2002/named-page).
# Data
This dataset provides:
- `queries` (i.e., topics); count=150
- `qrels`: (relevance assessments); count=170
- For `docs`, use [`irds/gov`](https://huggingface.co/datasets/irds/gov)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/gov_trec-web-2002_named-page', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/gov_trec-web-2002_named-page', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Craswell2002TrecWeb,
title={Overview of the TREC-2002 Web Track},
author={Nick Craswell and David Hawking},
booktitle={TREC},
year={2002}
}
```
|
irds/lotte_writing_dev_search | 2023-01-05T03:15:57.000Z | [
"task_categories:text-retrieval",
"source_datasets:irds/lotte_writing_dev",
"arxiv:2112.01488",
"region:us"
] | irds | null | null | null | 0 | 4 | ---
pretty_name: '`lotte/writing/dev/search`'
viewer: false
source_datasets: ['irds/lotte_writing_dev']
task_categories:
- text-retrieval
---
# Dataset Card for `lotte/writing/dev/search`
The `lotte/writing/dev/search` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/writing/dev/search).
# Data
This dataset provides:
- `queries` (i.e., topics); count=497
- `qrels`: (relevance assessments); count=1,287
- For `docs`, use [`irds/lotte_writing_dev`](https://huggingface.co/datasets/irds/lotte_writing_dev)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/lotte_writing_dev_search', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/lotte_writing_dev_search', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Santhanam2021ColBERTv2,
title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia",
journal= "arXiv preprint arXiv:2112.01488",
year = "2021",
url = "https://arxiv.org/abs/2112.01488"
}
```
|
irds/trec-arabic | 2023-01-05T03:51:15.000Z | [
"task_categories:text-retrieval",
"region:us"
] | irds | null | null | null | 0 | 4 | ---
pretty_name: '`trec-arabic`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `trec-arabic`
The `trec-arabic` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-arabic#trec-arabic).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=383,872
This dataset is used by: [`trec-arabic_ar2001`](https://huggingface.co/datasets/irds/trec-arabic_ar2001), [`trec-arabic_ar2002`](https://huggingface.co/datasets/irds/trec-arabic_ar2002)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-arabic', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@misc{Graff2001Arabic,
title={Arabic Newswire Part 1 LDC2001T55},
author={Graff, David, and Walker, Kevin},
year={2001},
url={https://catalog.ldc.upenn.edu/LDC2001T55},
publisher={Linguistic Data Consortium}
}
```
|
appvizer/product-sheets-in-french | 2023-03-06T10:13:50.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:fr",
"license:mit",
"region:us"
] | appvizer | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- text-classification
language:
- fr
size_categories:
- 10K<n<100K
viewer: false
---
|
FatmaZahraZ/JobDecriptionsEntityRecognition | 2023-01-06T21:40:48.000Z | [
"region:us"
] | FatmaZahraZ | The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note that this dataset uses the IOB2
tagging scheme, whereas the original dataset uses IOB1.
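The contrast between the two schemes can be sketched with a short converter (the example tags below are hypothetical, not drawn from this dataset): in IOB2 every entity-initial token is tagged `B-TYPE`, while in IOB1 `B-TYPE` appears only when two same-type entities are adjacent.

```python
def iob1_to_iob2(tags):
    """Convert IOB1 tags to IOB2 by promoting entity-initial I-TYPE tags to B-TYPE."""
    out = []
    for i, tag in enumerate(tags):
        if tag.startswith("I-"):
            prev = tags[i - 1] if i > 0 else "O"
            # In IOB2 the first token of every entity must carry B-TYPE.
            if prev == "O" or prev[2:] != tag[2:]:
                tag = "B-" + tag[2:]
        out.append(tag)
    return out

print(iob1_to_iob2(["I-ORG", "O", "I-PER", "O", "O", "I-LOC"]))
# ['B-ORG', 'O', 'B-PER', 'O', 'O', 'B-LOC']
```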
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419 | @inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
} | null | 0 | 4 | |
jrtec/Superheroes | 2023-01-08T06:18:48.000Z | [
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"superheroes",
"heroes",
"anime",
"manga",
"marvel",
"region:us"
] | jrtec | null | null | null | 0 | 4 | ---
license: cc0-1.0
task_categories:
- summarization
language:
- en
tags:
- superheroes
- heroes
- anime
- manga
- marvel
size_categories:
- 1K<n<10K
---
# Dataset Card for Superheroes
## Dataset Description
1400+ Superheroes history and powers description to apply text mining and NLP [Original source](https://www.kaggle.com/datasets/jonathanbesomi/superheroes-nlp-dataset/code?resource=download)
## Context
The aim of this dataset is to make text analytics and NLP even more fun. All of us have dreamed of being a superhero and saving the world, yet here we are on Kaggle figuring out how Python works. So why not improve our NLP skills by analyzing superheroes' histories and powers?
What makes this dataset special is that it contains categorical and numerical features such as overall_score, intelligence_score, creator, alignment, gender, and eye_color, as well as the text features history_text and powers_text. By combining the two, a lot of interesting insights can be gathered!
## Content
We collected all data from superherodb and cooked it into a nice and clean tabular format for you.
The dataset contains 1447 different Superheroes. Each superhero row has:
* overall_score - derived by superherodb from the power stats features. Can you find the relationship?
* history_text - History of the Superhero (text features)
* powers_text - Description of Superheros' powers (text features)
* intelligence_score, strength_score, speed_score, durability_score, power_score and combat_score. (power stats features)
* "Origin" (full_name, alter_egos, …)
* "Connections" (occupation, base, teams, …)
* "Appareance" (gender, type_race, height, weight, eye_color, …)
## Acknowledgements
The following [Github repository](https://github.com/jbesomi/texthero/tree/master/dataset/Superheroes%20NLP%20Dataset) contains the code used to scrape this Dataset.
|
Multimodal-Fatima/OxfordPets_test | 2023-08-15T05:11:14.000Z | [
"region:us"
] | Multimodal-Fatima | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': abyssinian
'1': american bulldog
'2': american pit bull terrier
'3': basset hound
'4': beagle
'5': bengal
'6': birman
'7': bombay
'8': boxer
'9': british shorthair
'10': chihuahua
'11': egyptian mau
'12': english cocker spaniel
'13': english setter
'14': german shorthaired
'15': great pyrenees
'16': havanese
'17': japanese chin
'18': keeshond
'19': leonberger
'20': maine coon
'21': miniature pinscher
'22': newfoundland
'23': persian
'24': pomeranian
'25': pug
'26': ragdoll
'27': russian blue
'28': saint bernard
'29': samoyed
'30': scottish terrier
'31': shiba inu
'32': siamese
'33': sphynx
'34': staffordshire bull terrier
'35': wheaten terrier
'36': yorkshire terrier
- name: species
dtype:
class_label:
names:
'0': Cat
'1': Dog
- name: id
dtype: int64
- name: clip_tags_ViT_L_14
sequence: string
- name: blip_caption
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_ViT_L_14
sequence: string
- name: clip_tag_ViT_L_14_specific
dtype: string
- name: clip_tags_ViT_L_14_ensemble_specific
dtype: string
- name: clip_tags_ViT_L_14_simple_specific
dtype: string
- name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14
sequence: string
- name: clip_tags_ViT_L_14_with_openai_classes
sequence: string
- name: clip_tags_ViT_L_14_wo_openai_classes
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_full
sequence: string
- name: Attributes_ViT_L_14_text_davinci_003_oxfordpets
sequence: string
- name: clip_tags_ViT_B_16_simple_specific
dtype: string
- name: clip_tags_ViT_B_16_ensemble_specific
dtype: string
- name: clip_tags_ViT_B_32_simple_specific
dtype: string
- name: clip_tags_ViT_B_32_ensemble_specific
dtype: string
- name: Attributes_ViT_L_14_descriptors_text_davinci_003_full_validate
sequence: string
- name: Attributes_ViT_B_16_descriptors_text_davinci_003_full
sequence: string
- name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full
sequence: string
- name: clip_tags_LAION_ViT_H_14_2B_simple_specific
dtype: string
- name: clip_tags_LAION_ViT_H_14_2B_ensemble_specific
dtype: string
- name: blip_caption_beam_5_Salesforce_blip2_opt_6.7b
dtype: string
splits:
- name: test
num_bytes: 421721560.0
num_examples: 3669
download_size: 413176127
dataset_size: 421721560.0
---
# Dataset Card for "OxfordPets_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Lord-Goku/testing_1 | 2023-01-11T18:16:39.000Z | [
"license:afl-3.0",
"region:us"
] | Lord-Goku | null | null | null | 0 | 4 | ---
license: afl-3.0
---
---
TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging
---
# Dataset Card for Testing Stock Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a test dataset
### Supported Tasks and Leaderboards
BERT
MARKET
STOCK
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
LLukas22/lfqa_preprocessed | 2023-01-10T14:21:56.000Z | [
"task_categories:question-answering",
"task_categories:sentence-similarity",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | LLukas22 | null | null | null | 0 | 4 | ---
license: mit
task_categories:
- question-answering
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for "lfqa_preprocessed"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb)
### Dataset Summary
This is a simplified version of [vblagoje's](https://huggingface.co/vblagoje) *[lfqa_support_docs](https://huggingface.co/datasets/vblagoje/lfqa_support_docs)* and *[lfqa](https://huggingface.co/datasets/vblagoje/lfqa)* datasets.
I generated it to provide a more straightforward way to train Seq2Seq models on context-based long-form question answering tasks.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"question": "what's the difference between a forest and a wood?",
"answer": "They're used interchangeably a lot. You'll get different answers from different resources, but the ...",
"context": [
"Wood is divided, according to its botanical origin, into two kinds: softwoods, ...",
"Processing and products differs especially with regard to the distinction between softwood and hardwood ..."
]
}
```
### Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `context`: a list feature containing `string` features.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
| |226147| 3020|
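When training a Seq2Seq model on these fields, the question and the retrieved context passages are usually packed into one encoder input string. A minimal formatting helper is sketched below; the `question:`/`context:` prefixes and the `<sep>` token are assumptions about one reasonable format, not the format used in the linked article:

```python
def format_example(example, sep=" <sep> "):
    """Pack question + context passages into (encoder_input, target) strings."""
    context = sep.join(example["context"])
    source = f"question: {example['question']} context: {context}"
    return source, example["answer"]

example = {
    "question": "what's the difference between a forest and a wood?",
    "answer": "They're used interchangeably a lot.",
    "context": ["Wood is divided, according to its botanical origin, into two kinds.",
                "Processing and products differs between softwood and hardwood."],
}
source, target = format_example(example)
print(source.count("<sep>"), target == example["answer"])  # 1 True
```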
## Additional Information
### Licensing Information
This dataset is distributed under the MIT licence. |
Cohere/wikipedia-22-12-it-embeddings | 2023-03-22T16:54:18.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:it",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 1 | 4 | ---
annotations_creators:
- expert-generated
language:
- it
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (it) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (it)](https://it.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
For an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-it-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-it-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
# Run: pip install cohere datasets torch
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
# Load at most 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-it-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
Cohere/wikipedia-22-12-es-embeddings | 2023-03-22T16:53:23.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:es",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 4 | 4 | ---
annotations_creators:
- expert-generated
language:
- es
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (es)](https://es.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
For an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
# Run: pip install cohere datasets torch
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
# Load at most 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
eengel7/sentiment_analysis_training_test | 2023-01-14T19:13:40.000Z | [
"license:apache-2.0",
"region:us"
] | eengel7 | null | null | null | 0 | 4 | ---
license: apache-2.0
---
|
keremberke/excavator-detector | 2023-01-16T21:43:21.000Z | [
"task_categories:object-detection",
"roboflow",
"roboflow2huggingface",
"Manufacturing",
"Construction",
"Machinery",
"region:us"
] | keremberke | null | @misc{ excavators-cwlh0_dataset,
title = { Excavators Dataset },
type = { Open Source Dataset },
author = { Mohamed Sabek },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-16 },
} | null | 0 | 4 | ---
task_categories:
- object-detection
tags:
- roboflow
- roboflow2huggingface
- Manufacturing
- Construction
- Machinery
---
<div align="center">
<img width="640" alt="keremberke/excavator-detector" src="https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['excavators', 'dump truck', 'wheel loader']
```
### Number of Images
```json
{'test': 144, 'train': 2245, 'valid': 267}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/excavator-detector", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3](https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ excavators-cwlh0_dataset,
title = { Excavators Dataset },
type = { Open Source Dataset },
author = { Mohamed Sabek },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on April 4, 2022 at 8:56 AM GMT
It includes 2656 images.
Excavators are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
metaeval/cycic_multiplechoice | 2023-01-18T12:15:47.000Z | [
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"arxiv:2301.05948",
"region:us"
] | metaeval | null | null | null | 4 | 4 | ---
license: apache-2.0
task_categories:
- multiple-choice
language:
- en
---
https://colab.research.google.com/drive/16nyxZPS7-ZDFwp7tn_q72Jxyv0dzK1MP?usp=sharing
```
@article{Kejriwal2020DoFC,
title={Do Fine-tuned Commonsense Language Models Really Generalize?},
author={Mayank Kejriwal and Ke Shen},
journal={ArXiv},
year={2020},
volume={abs/2011.09159}
}
```
added for
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` |
silatus/1k_Website_Screenshots_and_Metadata | 2023-01-19T05:20:33.000Z | [
"task_categories:text-to-image",
"task_categories:image-classification",
"task_categories:image-segmentation",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-nc-sa-4.0",
"screenshots",
"metadata",
"websites",
"webpages",
"region:us"
] | silatus | null | null | null | 8 | 4 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text-to-image
- image-classification
- image-segmentation
language:
- en
tags:
- screenshots
- metadata
- websites
- webpages
pretty_name: 1000 Website Screenshots with Metadata
size_categories:
- 1K<n<10K
---
# Dataset Card for 1000 Website Screenshots with Metadata
## Dataset Description
- **Homepage:** [silatus.com](https://silatus.com/datasets)
- **Point of Contact:** [datasets@silatus.com](mailto:datasets@silatus.com)
### Dataset Summary
Silatus is sharing, for free, a segment of a dataset that we are using to train a generative AI model for text-to-mockup conversions. This dataset was collected in December 2022 and early January 2023, so it contains very recent data from 1,000 of the world's most popular websites. You can get our larger 10,000 website dataset for free at: [https://silatus.com/datasets](https://silatus.com/datasets)
This dataset includes:
**High-res screenshots**
- 1024x1024px
- Loaded Javascript
- Loaded Images
**Text metadata**
- Site title
- Navbar content
- Full page text data
- Page description
**Visual metadata**
- Content (images, videos, inputs, buttons) absolute & relative positions
- Color profile
- Base font |
yhavinga/imdb_dutch | 2023-01-21T10:57:39.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:nl",
"language:en",
"license:other",
"reg... | yhavinga | Large Movie Review Dataset translated to Dutch.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 24,992 highly polar movie reviews for training, and 24,992 for testing. There is additional unlabeled data for use as well. | @InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
} | null | 0 | 4 | ---
pretty_name: IMDB
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- nl
- en
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: imdb-movie-reviews
train-eval-index:
- config: plain_text
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
- name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
dataset_info:
features:
- name: text
dtype: string
- name: text_en
dtype: string
- name: label
dtype:
class_label:
names:
0: neg
1: pos
config_name: plain_text
splits:
- name: train
num_bytes: 69589646
num_examples: 24992
- name: test
num_bytes: 67958995
num_examples: 24992
- name: unsupervised
num_bytes: 139649169
num_examples: 49984
download_size: 108170940
dataset_size: 277197810
---
# Dataset Card for "imdb_dutch"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Large Movie Review Dataset translated to Dutch.
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets.
We provide a set of 24,992 highly polar movie reviews for training, and 24,992 for testing. There is additional unlabeled data for use as well.
### Translation to Dutch
The dataset was translated with [yhavinga/ul2-large-en-nl](https://huggingface.co/yhavinga/ul2-large-en-nl).
The translation code is available in the src directory.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
This dataset contains Dutch and English data.
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 108 MiB
- **Size of the generated dataset:** 277 MiB
An example of 'train' looks as follows.
```
{
"label": 0,
    "text": "Holy shit. Dit was de slechtste film die ik in lange tijd heb gezien.",
"text_en": "Holy crap. This was the worst film I have seen in a long time."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
- `text_en`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
### Data Splits
| name |train|unsupervised|test |
|----------|----:|-----------:|----:|
|plain_text|24992| 49984|24992|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
Thanks to [@ghazi-f](https://github.com/ghazi-f), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding
the English `imdb` dataset.
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
nglaura/arxivlay-summarization | 2023-04-11T10:08:36.000Z | [
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"region:us"
] | nglaura | null | null | null | 0 | 4 | ---
license: apache-2.0
task_categories:
- summarization
language:
- en
pretty_name: arXiv-Lay
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## Arxiv-Lay dataset for summarization
ArXiv-Lay is an enhanced version of the arXiv summarization dataset, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
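The relationship between `article_bboxes` and `norm_article_bboxes` can be sketched as below, assuming the common layout-model convention of scaling coordinates to a 0-1000 range relative to page size (the dataset's exact scaling is an assumption here, not taken from the card):

```python
def normalize_bbox(bbox, page_width, page_height):
    """Scale an (x0, y0, x1, y1) word box to the 0-1000 range
    used by layout-aware models (assumed convention)."""
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / page_width),
        int(1000 * y0 / page_height),
        int(1000 * x1 / page_width),
        int(1000 * y1 / page_height),
    ]

# A word box on a 612x792pt (US Letter) PDF page
print(normalize_bbox((72, 100, 150, 112), 612, 792))  # → [117, 126, 245, 141]
```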
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 122,189 |
| Validation | 4,374 |
| Test | 4,356 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
``` |
nglaura/hal-summarization | 2023-04-11T10:15:37.000Z | [
"task_categories:summarization",
"language:fr",
"license:apache-2.0",
"region:us"
] | nglaura | null | null | null | 0 | 4 | ---
license: apache-2.0
task_categories:
- summarization
language:
- fr
pretty_name: HAL
---
# LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization
A collaboration between [reciTAL](https://recital.ai/en/), [MLIA](https://mlia.lip6.fr/) (ISIR, Sorbonne Université), [Meta AI](https://ai.facebook.com/), and [Università di Trento](https://www.unitn.it/)
## HAL dataset for summarization
HAL is a dataset for summarization of research papers written in French, for which layout information is provided.
### Data Fields
- `article_id`: article id
- `article_words`: sequence of words constituting the body of the article
- `article_bboxes`: sequence of corresponding word bounding boxes
- `norm_article_bboxes`: sequence of corresponding normalized word bounding boxes
- `abstract`: a string containing the abstract of the article
- `article_pdf_url`: URL of the article's PDF
### Data Splits
This dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances |
| ------------- | --------------------|
| Train | 43,379 |
| Validation | 1,384 |
| Test | 1,385 |
## Citation
``` latex
@article{nguyen2023loralay,
title={LoRaLay: A Multilingual and Multimodal Dataset for Long Range and Layout-Aware Summarization},
author={Nguyen, Laura and Scialom, Thomas and Piwowarski, Benjamin and Staiano, Jacopo},
journal={arXiv preprint arXiv:2301.11312},
year={2023}
}
``` |
michelecafagna26/hl | 2023-08-02T11:50:20.000Z | [
"task_categories:image-to-text",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"arxiv:1405.0312",
"a... | michelecafagna26 | High-level Dataset | @inproceedings{Cafagna2023HLDG,
title={HL Dataset: Grounding High-Level Linguistic Concepts in Vision},
author={Michele Cafagna and Kees van Deemter and Albert Gatt},
year={2023}
} | null | 4 | 4 | ---
license: apache-2.0
task_categories:
- image-to-text
- question-answering
- zero-shot-classification
language:
- en
multilinguality:
- monolingual
task_ids:
- text-scoring
pretty_name: HL (High-Level Dataset)
size_categories:
- 10K<n<100K
annotations_creators:
- crowdsourced
annotations_origin:
- crowdsourced
dataset_info:
splits:
- name: train
num_examples: 13498
- name: test
num_examples: 1499
---
# Dataset Card for the High-Level Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
The High-Level (HL) dataset aligns **object-centric descriptions** from [COCO](https://arxiv.org/pdf/1405.0312.pdf)
with **high-level descriptions** crowdsourced along 3 axes: **_scene_, _action_, _rationale_**
The HL dataset contains 14997 images from COCO and a total of 134973 crowdsourced captions (3 captions for each axis) aligned with ~749984 object-centric captions from COCO.
Each axis is collected by asking the following 3 questions:
1) Where is the picture taken?
2) What is the subject doing?
3) Why is the subject doing it?
**The high-level descriptions capture the human interpretations of the images**. These interpretations contain abstract concepts not directly linked to physical objects.
Each high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which
the high-level description is likely given the corresponding image, question, and caption. The higher the score, the closer the high-level caption is to common sense (on a Likert scale from 1 to 5).
- **🗃️ Repository:** [github.com/michelecafagna26/HL-dataset](https://github.com/michelecafagna26/HL-dataset)
- **📜 Paper:** [HL Dataset: Visually-grounded Description of Scenes, Actions and Rationales](https://arxiv.org/abs/2302.12189?context=cs.CL)
- **🧭 Spaces:** [Dataset explorer](https://huggingface.co/spaces/michelecafagna26/High-Level-Dataset-explorer)
- **🖊️ Contact:** michele.cafagna@um.edu.mt
### Supported Tasks
- image captioning
- visual question answering
- multimodal text-scoring
- zero-shot evaluation
### Languages
English
## Dataset Structure
The dataset is provided with images from COCO and two metadata jsonl files containing the annotations
### Data Instances
An instance looks like this:
```json
{
"file_name": "COCO_train2014_000000138878.jpg",
"captions": {
"scene": [
"in a car",
"the picture is taken in a car",
"in an office."
],
"action": [
"posing for a photo",
"the person is posing for a photo",
"he's sitting in an armchair."
],
"rationale": [
"to have a picture of himself",
"he wants to share it with his friends",
"he's working and took a professional photo."
],
"object": [
"A man sitting in a car while wearing a shirt and tie.",
"A man in a car wearing a dress shirt and tie.",
"a man in glasses is wearing a tie",
"Man sitting in the car seat with button up and tie",
"A man in glasses and a tie is near a window."
]
},
"confidence": {
"scene": [
5,
5,
4
],
"action": [
5,
5,
4
],
"rationale": [
5,
5,
4
]
},
"purity": {
"scene": [
-1.1760284900665283,
-1.0889461040496826,
-1.442818284034729
],
"action": [
-1.0115827322006226,
-0.5917857885360718,
-1.6931917667388916
],
"rationale": [
-1.0546956062316895,
-0.9740906357765198,
-1.2204363346099854
]
},
"diversity": {
"scene": 25.965358893403383,
"action": 32.713305568898775,
"rationale": 2.658757840479801
}
}
```
### Data Fields
- ```file_name```: original COCO filename
- ```captions```: Dict containing all the captions for the image. Each axis can be accessed by its name and contains a list of captions.
- ```confidence```: Dict containing the caption confidence scores. Each axis can be accessed by its name and contains a list of scores. Confidence scores are not provided for the _object_ axis (COCO captions).
- ```purity```: Dict containing the caption purity scores. The purity score measures the semantic similarity of the captions within the same axis (BLEURT-based).
- ```diversity```: Dict containing the caption diversity scores. The diversity score measures the lexical diversity of the captions within the same axis (Self-BLEU-based).
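As a quick sketch of working with these nested fields, here is how the per-axis mean confidence of an instance could be computed (the instance below is abbreviated from the example above):

```python
def mean_confidence(instance):
    """Average the crowdsourced confidence scores per axis (1-5 Likert)."""
    return {
        axis: sum(scores) / len(scores)
        for axis, scores in instance['confidence'].items()
    }

instance = {
    'confidence': {
        'scene': [5, 5, 4],
        'action': [5, 5, 4],
        'rationale': [5, 5, 4],
    }
}
print(mean_confidence(instance))  # per-axis means, each ~4.67 here
```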
### Data Splits
There are 14997 images and 134973 high-level captions split into:
- Train-val: 13498 images and 121482 high-level captions
- Test: 1499 images and 13491 high-level captions
## Dataset Creation
The dataset has been crowdsourced on Amazon Mechanical Turk.
From the paper:
>We randomly select 14997 images from the COCO 2014 train-val split. In order to answer questions related to _actions_ and _rationales_ we need to
> ensure the presence of a subject in the image. Therefore, we leverage the entity annotation provided in COCO to select images containing
> at least one person. The whole annotation is conducted on Amazon Mechanical Turk (AMT). We split the workload into batches in order to ease
>the monitoring of the quality of the data collected. Each image is annotated by three different annotators, therefore we collect three annotations per axis.
### Curation Rationale
From the paper:
>In this work, we tackle the issue of **grounding high-level linguistic concepts in the visual modality**, proposing the High-Level (HL) Dataset: a
V&L resource aligning existing object-centric captions with human-collected high-level descriptions of images along three different axes: _scenes_, _actions_ and _rationales_.
The high-level captions capture the human interpretation of the scene, providing abstract linguistic concepts complementary to object-centric captions
>used in current V&L datasets, e.g. in COCO. We take a step further, and we collect _confidence scores_ to distinguish commonsense assumptions
>from subjective interpretations and we characterize our data under a variety of semantic and lexical aspects.
### Source Data
- Images: COCO
- object axis annotations: COCO
- scene, action, rationale annotations: crowdsourced
- confidence scores: crowdsourced
- purity score and diversity score: automatically computed
#### Annotation process
From the paper:
>**Pilot:** We run a pilot study with the double goal of collecting feedback and defining the task instructions.
>With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform.
>We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the
>annotation in bulk. The final annotation form is shown in Appendix D.
>***Procedure:*** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_
> i.e., _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use
>their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover,
>differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities
>in the image, we systematically ask the same three questions about the same subject for each image. The full instructions are reported
>in Figure 1. For details regarding the annotation costs see Appendix A.
#### Who are the annotators?
Turkers from Amazon Mechanical Turk
### Personal and Sensitive Information
There is no personal or sensitive information
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
From the paper:
>**Quantifying grammatical errors:** We ask two expert annotators to correct grammatical errors in a sample of 9900 captions, 900 of which are shared between the two annotators.
> The annotators are shown the image caption pairs and they are asked to edit the caption whenever they identify a grammatical error.
>The most common errors reported by the annotators are:
>- Misuse of prepositions
>- Wrong verb conjugation
>- Pronoun omissions
>In order to quantify the extent to which the corrected captions differ from the original ones, we compute the Levenshtein distance (Levenshtein, 1966) between them.
>We observe that 22.5% of the sample has been edited and only 5% with a Levenshtein distance greater than 10. This suggests a reasonable
>level of grammatical quality overall, with no substantial grammatical problems. This can also be observed from the Levenshtein distance
>distribution reported in Figure 2. Moreover, the human evaluation is quite reliable as we observe a moderate inter-annotator agreement
>(alpha = 0.507; Krippendorff, 2018) computed over the shared sample.
### Dataset Curators
Michele Cafagna
### Licensing Information
The Images and the object-centric captions follow the [COCO terms of Use](https://cocodataset.org/#termsofuse)
The remaining annotations are licensed under Apache-2.0 license.
### Citation Information
```BibTeX
@inproceedings{cafagna2023hl,
title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and
{R}ationales},
author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
address = {Prague, Czech Republic},
year={2023}
}
```
|
gorar/A-MNIST | 2023-01-25T22:17:05.000Z | [
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:mit",
"region:us"
] | gorar | The dataset is built on top of MNIST.
It consists of 130K images in 10 classes - 120K training and 10K test samples.
The training set was augmented with an additional 60K images. | null | null | 0 | 4 | ---
license: mit
task_categories:
- image-classification
size_categories:
- 100K<n<1M
--- |
svjack/bloom-dialogue-generate-ds-zh | 2023-01-26T03:53:12.000Z | [
"region:us"
] | svjack | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: question
dtype: string
- name: dialogue_text
dtype: string
- name: dialogue
sequence: string
- name: repo
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 98021681
num_examples: 24297
download_size: 101459282
dataset_size: 98021681
---
# Dataset Card for "bloom-dialogue-generate-ds-zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
metaeval/arct | 2023-05-15T08:19:50.000Z | [
"license:apache-2.0",
"region:us"
] | metaeval | null | null | null | 0 | 4 | ---
license: apache-2.0
---
# The Argument Reasoning Comprehension Task: Identification and Reconstruction of Implicit Warrants
https://github.com/UKPLab/argument-reasoning-comprehension-task
```bib
@InProceedings{Habernal.et.al.2018.NAACL.ARCT,
title = {The Argument Reasoning Comprehension Task: Identification
and Reconstruction of Implicit Warrants},
author = {Habernal, Ivan and Wachsmuth, Henning and
Gurevych, Iryna and Stein, Benno},
publisher = {Association for Computational Linguistics},
booktitle = {Proceedings of the 2018 Conference of the North American Chapter
of the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long Papers)},
pages = {1930--1940},
month = jun,
year = {2018},
address = {New Orleans, Louisiana},
url = {http://aclweb.org/anthology/N18-1175}
}
``` |
gigant/tib_slides | 2023-03-25T14:28:21.000Z | [
"region:us"
] | gigant | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: Image
dtype: image
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 131956494917.654
num_examples: 484843
download_size: 0
dataset_size: 131956494917.654
---
# Dataset Card for "tib_slides"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChristophSchuhmann/screenplays | 2023-01-27T19:58:15.000Z | [
"region:us"
] | ChristophSchuhmann | null | null | null | 4 | 4 | Entry not found |
Cohere/miracl-sw-queries-22-12 | 2023-02-06T12:02:02.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:sw",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- sw
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (sw) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-sw-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-sw-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-sw-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, you must use **dot-product**.
And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-sw-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-sw-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # only the selected query, shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is therefore likely higher than reported.
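For concreteness, hit@3 can be computed as a simple per-query check. This is a sketch with assumed inputs (`qrels` mapping a query id to its set of relevant doc ids, `runs` mapping a query id to a ranked list of retrieved doc ids), not code shipped with the dataset:

```python
def hit_at_k(relevant, retrieved, k=3):
    """True if at least one relevant doc id appears in the top-k retrieved ids."""
    return any(doc_id in relevant for doc_id in retrieved[:k])

def mean_hit_at_k(qrels, runs, k=3):
    """Fraction of queries with at least one relevant doc in the top-k."""
    hits = [hit_at_k(qrels[qid], runs[qid], k) for qid in qrels]
    return sum(hits) / len(hits)
```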
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-bn-queries-22-12 | 2023-02-06T12:01:34.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:bn",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- bn
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (bn) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-bn-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-bn-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-bn-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-bn-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-bn-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # only the selected query, shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is therefore likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
gokuls/glue_augmented_mrpc | 2023-01-30T14:34:28.000Z | [
"license:apache-2.0",
"region:us"
] | gokuls | null | null | null | 1 | 4 | ---
license: apache-2.0
---
# Dataset Card for glue_augmented_mrpc
## Dataset Description
Augmented MRPC dataset
**Reference:** https://huggingface.co/datasets/glue |
Cohere/miracl-hi-queries-22-12 | 2023-02-06T12:02:28.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:hi",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- hi
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (hi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-hi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-hi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-hi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-hi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-hi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-hi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-hi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # only the selected query, shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is therefore likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-te-queries-22-12 | 2023-02-06T12:00:55.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:te",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- te
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (te) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-te-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-te-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-te-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-te-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-te-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-te-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # only the selected query, shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is therefore likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-th-queries-22-12 | 2023-02-06T12:01:19.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:th",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- th
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (th) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-th-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-th-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-th-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-th-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-th-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-th-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # only the selected query, shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The real nDCG@10 and hit@3 performance is therefore likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-fa-queries-22-12 | 2023-02-06T11:59:41.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fa",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- fa
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fa) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fa-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fa-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fa-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fa-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fa-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use **dot-product** similarity: compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fa-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fa-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it directly gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
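For concreteness, the two metrics can be sketched for a single query as follows (a toy illustration on hypothetical relevance labels, not the official MIRACL evaluation code):

```python
import math

def ndcg_at_k(ranked_rels, k=10):
    """nDCG@k for one query. ranked_rels lists the relevance labels
    (1 = relevant, 0 = not relevant) of the retrieved documents, in ranked order."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def hit_at_k(ranked_rels, k=3):
    """hit@k for one query: 1 if at least one relevant document is in the top-k."""
    return 1.0 if any(ranked_rels[:k]) else 0.0

# Toy ranking: the 2nd and 5th retrieved documents are relevant
rels = [0, 1, 0, 0, 1]
print(round(ndcg_at_k(rels), 3))  # nDCG@10 for this ranking
print(hit_at_k(rels))             # 1.0 -> a relevant document is in the top-3
```

The per-language numbers in the tables are these values averaged over all queries, shown as percentages.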
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-fi-queries-22-12 | 2023-02-06T11:59:18.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fi",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- fi
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` model, a state-of-the-art embedding model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fi-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fi-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fi-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot-product** as the similarity measure: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fi-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fi-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it directly gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-id-queries-22-12 | 2023-02-06T11:58:53.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:id",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- id
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (id) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` model, a state-of-the-art embedding model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-id-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-id-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-id-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-id-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot-product** as the similarity measure: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-id-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-id-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it directly gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-ko-queries-22-12 | 2023-02-06T11:58:15.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 1 | 4 | ---
annotations_creators:
- expert-generated
language:
- ko
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` model, a state-of-the-art embedding model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ko-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ko-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ko-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot-product** as the similarity measure: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ko-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ko-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it directly gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-es-queries-22-12 | 2023-02-06T11:57:49.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:es",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- es
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` model, a state-of-the-art embedding model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-es-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-es-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-es-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-es-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot-product** as the similarity measure: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-es-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-es-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it directly gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-fr-queries-22-12 | 2023-02-06T11:57:25.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- fr
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` model, a state-of-the-art embedding model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-fr-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-fr-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-fr-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot-product** as the similarity measure: compare the query embedding with the document embeddings, either via a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-fr-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-fr-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the Cohere multilingual-22-12 model with Elasticsearch 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is therefore likely higher than shown.
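As a concrete illustration, hit@3 can be computed from dot-product scores with `torch.topk`. The sketch below uses toy, normalized embeddings, so all sizes, ids, and relevance labels are made up; with unit vectors, the dot product of a vector with itself is 1.0, so each query's source document must rank first.

```python
import torch

torch.manual_seed(0)
# Toy normalized corpus: 100 "documents" with 8-dim unit embeddings
doc_embeddings = torch.nn.functional.normalize(torch.randn(100, 8), dim=1)
# Two toy queries, identical to documents 5 and 42
query_embeddings = doc_embeddings[[5, 42]]
relevant = {0: {5}, 1: {42}}              # ground-truth doc ids per query

scores = torch.mm(query_embeddings, doc_embeddings.T)
top3 = torch.topk(scores, k=3).indices    # shape: (num_queries, 3)

# A query counts as a hit if any relevant doc id appears in its top-3
hits = sum(bool(relevant[q] & set(top3[q].tolist())) for q in relevant)
print(f"hit@3: {hits / len(relevant):.2f}")  # hit@3: 1.00
```

The reported hit@3 numbers below are this fraction averaged over all annotated queries.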
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-ja-queries-22-12 | 2023-02-06T11:57:00.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 1 | 4 | ---
annotations_creators:
- expert-generated
language:
- ja
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ja-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ja-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ja-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ja-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ja-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot product** as the similarity function.
Compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ja-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ja-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for torch.mm
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the Cohere multilingual-22-12 model with Elasticsearch 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is therefore likely higher than shown.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-ru-queries-22-12 | 2023-02-06T11:56:00.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ru",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- ru
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (ru) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-ru-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-ru-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-ru-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot product** as the similarity function.
Compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-ru-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-ru-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for torch.mm
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the Cohere multilingual-22-12 model with Elasticsearch 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is therefore likely higher than shown.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
mlfoundations/datacomp_pools | 2023-08-21T21:43:57.000Z | [
"license:cc-by-4.0",
"region:us"
] | mlfoundations | null | null | null | 12 | 4 | ---
license: cc-by-4.0
---
## DataComp Pools
This repository contains metadata files for DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp).
We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights.
## Terms and Conditions
We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage.
|
jonathan-roberts1/SAT-4 | 2023-04-03T16:17:18.000Z | [
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': barren land
'1': grassland
'2': other
'3': trees
splits:
- name: train
num_bytes: 150589308
num_examples: 100000
download_size: 177776551
dataset_size: 150589308
license: other
---
# Dataset Card for Dataset Name
## Dataset Description
- **Paper** [Deepsat: a learning framework for satellite imagery](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
- **Split** Test
### Split Information
This HuggingFace dataset repository contains just the 'Test' split.
### Licensing Information
Public Domain
## Citation Information
[https://dl.acm.org/doi/pdf/10.1145/2820783.2820816](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
```
@inproceedings{basu2015deepsat,
title = {Deepsat: a learning framework for satellite imagery},
author = {Basu, Saikat and Ganguly, Sangram and Mukhopadhyay, Supratik and DiBiano, Robert and Karki, Manohar and Nemani, Ramakrishna},
year = 2015,
booktitle = {Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems},
pages = {1--10}
}
``` |
jonathan-roberts1/SAT-6 | 2023-04-03T16:17:41.000Z | [
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': barren land
'1': building
'2': grassland
'3': road
'4': trees
'5': water
splits:
- name: train
num_bytes: 120518797
num_examples: 81000
download_size: 142842069
dataset_size: 120518797
license: other
---
# Dataset Card for "SAT-6"
## Dataset Description
- **Paper** [Deepsat: a learning framework for satellite imagery](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
- **Split** Test
### Split Information
This HuggingFace dataset repository contains just the 'Test' split.
### Licensing Information
Public Domain
## Citation Information
[https://dl.acm.org/doi/pdf/10.1145/2820783.2820816](https://dl.acm.org/doi/pdf/10.1145/2820783.2820816)
```
@inproceedings{basu2015deepsat,
title = {Deepsat: a learning framework for satellite imagery},
author = {Basu, Saikat and Ganguly, Sangram and Mukhopadhyay, Supratik and DiBiano, Robert and Karki, Manohar and Nemani, Ramakrishna},
year = 2015,
booktitle = {Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems},
pages = {1--10}
}
```
|
Cohere/miracl-yo-queries-22-12 | 2023-02-06T11:54:06.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:yo",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- yo
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (yo) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-yo-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-yo-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-yo-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot product** as the similarity function.
Compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-yo-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-yo-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for torch.mm
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the Cohere multilingual-22-12 model with Elasticsearch 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is therefore likely higher than shown.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
Cohere/miracl-de-queries-22-12 | 2023-02-06T11:53:32.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:de",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- de
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-de-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-de-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-de-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-de-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search in the documents, use the **dot product** as the similarity function.
Compare the query embedding against the document embeddings, either with a vector database (recommended) or by computing the dot products directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-de-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-de-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor([query['emb']])  # shape (1, dim) for torch.mm
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(api_key)  # add your Cohere API key here
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the Cohere multilingual-22-12 model with Elasticsearch 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric) as well as hit@3: whether at least one relevant document appears in the top-3 results. We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
Note: MIRACL annotated only a small fraction of passages (10 per query) for relevance. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. The true nDCG@10 and hit@3 performance is therefore likely higher than shown.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
range3/wikipedia-ja-20230101 | 2023-02-04T05:44:41.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:ja",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | range3 | null | null | null | 2 | 4 | ---
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
language:
- ja
---
# range3/wikipedia-ja-20230101
This dataset consists of parquet files containing only the Japanese data extracted from the wikipedia dataset. It is generated by the following Python code.
```py
import datasets
dss = datasets.load_dataset(
"wikipedia",
language="ja",
date="20230101",
beam_runner="DirectRunner",
)
for split,ds in dss.items():
ds.to_parquet(f"wikipedia-ja-20230101/{split}.parquet")
```
|
racro/sentiment-analysis-finetune | 2023-02-05T21:28:56.000Z | [
"region:us"
] | racro | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 119146
num_examples: 751
download_size: 70123
dataset_size: 119146
---
# Dataset Card for "sentiment-analysis-finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
achang/plot_qa | 2023-02-12T01:20:56.000Z | [
"task_categories:visual-question-answering",
"language:en",
"license:cc",
"plotQA",
"region:us"
] | achang | null | null | null | 3 | 4 | ---
license: cc
task_categories:
- visual-question-answering
language:
- en
tags:
- plotQA
pretty_name: PlotQA
---
# Dataset Card for PlotQA
## Dataset Description
- **PlotQA from here:** [PlotQA](https://github.com/NiteshMethani/PlotQA)
### Dataset Summary
PlotQA is a VQA dataset with 28.9 million question-answer pairs grounded over 224,377 plots on data from real-world sources and questions based on crowd-sourced question templates.
## Dataset Structure
### Data Fields
- `image`: PIL image of a plot
- `text`: a string serialization of the json `models` data. See notes below.
From [here](https://github.com/NiteshMethani/PlotQA/blob/master/PlotQA_Dataset.md):
`models`: a list of dictionaries. Depending on the type of the plot (single or 2-, 3-, 4-multi), the length of the list can vary from 1 to 4. Each dictionary contains the following keys:
- `name`: label corresponding to the datapoint.
- `color`: color corresponding to the `name` datapoint.
- `bboxes`: bounding boxes corresponding to the `name` datapoints in the plot.
- `label`: label corresponding to the datapoint which will appear as the legend (same as the `name` field).
- `x`: x-values of the datapoints.
- `y`: y-values of the datapoints.
The [json2token](https://github.com/clovaai/donut/blob/b317b4bbf1eecec7c62e7666f2097e1e90a6b441/donut/model.py#L495) function was used to convert the json data to a token string.
The new tokens are already loaded in the PlotQA processor:
```python
from transformers import DonutProcessor
processor = DonutProcessor.from_pretrained("achang/donut-plotqa-trained")
```
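To illustrate the idea behind the linked helper, here is a simplified sketch of flattening a json object into a Donut-style token string. This is not the exact upstream function: the real donut implementation also registers the `<s_...>` markers as new tokenizer tokens and can sort keys.

```python
def json2token(obj):
    """Flatten a (possibly nested) json object into a Donut-style token string.

    Simplified, illustrative sketch of the donut helper linked above.
    """
    if isinstance(obj, dict):
        # Each key becomes an <s_key> ... </s_key> span around its flattened value.
        return "".join(f"<s_{k}>{json2token(v)}</s_{k}>" for k, v in obj.items())
    if isinstance(obj, list):
        # List items are joined with a separator token.
        return "<sep/>".join(json2token(v) for v in obj)
    return str(obj)
```

For example, `json2token({"name": "coffee"})` yields `<s_name>coffee</s_name>`.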
### Data Splits
```
validation: Dataset({
features: ['image', 'text'],
num_rows: 33650
})
train: Dataset({
features: ['image', 'text'],
num_rows: 157070
})
test: Dataset({
features: ['image', 'text'],
num_rows: 33657
})
```
## Misc
Dataset Creation, Annotations, Considerations for Using the Data, Social Impact of Dataset, Additional Information, Licensing Information look at [plotQA](https://github.com/NiteshMethani/PlotQA)
### Citation Information
Please cite the following if you use the PlotQA dataset in your work:
```
@InProceedings{Methani_2020_WACV,
author = {Methani, Nitesh and Ganguly, Pritha and Khapra, Mitesh M. and Kumar, Pratyush},
title = {PlotQA: Reasoning over Scientific Plots},
booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}
```
|
librarian-bots/model_card_dataset_mentions | 2023-06-30T15:09:18.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:mit",
"model cards",
"metadata",
"region:us"
] | librarian-bots | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': dataset_mention
'1': no_dataset_mention
splits:
- name: train
num_bytes: 58112
num_examples: 297
download_size: 19321
dataset_size: 58112
license: mit
task_categories:
- text-classification
language:
- en
tags:
- model cards
- metadata
pretty_name: Model Card Dataset Mentions
size_categories:
- n<1K
---
# Dataset Card for Model Card Dataset Mentions
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
hieunguyen1053/binhvq-news-corpus | 2023-02-09T15:49:42.000Z | [
"region:us"
] | hieunguyen1053 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: title
dtype: string
- name: summary
dtype: string
- name: category
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 51179763136
num_examples: 13954498
download_size: 19065155948
dataset_size: 51179763136
---
# Dataset Card for "binhvq-news-corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
cahya/instructions | 2023-02-10T21:02:35.000Z | [
"region:us"
] | cahya | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 71483925.44051038
num_examples: 170485
- name: test
num_bytes: 3971585.428468864
num_examples: 9472
- name: validation
num_bytes: 3971166.1310207574
num_examples: 9471
download_size: 45997378
dataset_size: 79426677.0
---
# Dataset Card for "instructions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rexarski/climate_fever_fixed | 2023-04-30T03:46:52.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"climate",
"region:us"
] | rexarski | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: claim_id
dtype: int64
- name: claim
dtype: string
- name: evidence
dtype: string
- name: label
dtype:
class_label:
names:
'0': SUPPORTS
'1': REFUTES
'2': NOT_ENOUGH_INFO
- name: category
dtype: string
splits:
- name: train
num_bytes: 1467456
num_examples: 4298
- name: test
num_bytes: 526276
num_examples: 1535
- name: valid
num_bytes: 635174
num_examples: 1842
download_size: 1372892
dataset_size: 2628906
license: mit
task_categories:
- text-classification
language:
- en
tags:
- climate
pretty_name: climate_fever dataset with one-to-one claim-evidence pair
size_categories:
- 1K<n<10K
---
# Dataset Card for "climate_fever_fixed"
### Dataset Summary
This dataset was created to aid our team in developing a model to more accurately perform climate change-related fact checking. We approach this task from a perspective heavily influenced
by the work of the [ClimateBERT](https://climatebert.ai/about) team. With that in mind, our team likewise leveraged a BERT language model to solve this task. This dataset presents an
edited version of the [Climate_Fever](https://huggingface.co/datasets/climate_fever) dataset, hosted by HuggingFace. Climate_Fever is composed of climate-related documents
that have been annotated with labels related to fact-checking and misinformation. However, in the climate-plus project, we decided to modify the dataset to remove redundancy
and keep only the essentials of a text-entailment problem: claim as the premise and evidence as the hypothesis.
### Data Fields
This dataset contains 7675 records, each of which is composed of several attributes:
- `claim_id`: a `integer` feature, which serves as a unique identifier for each record/row.
- `claim`: a `string` feature containing the raw text of a given climate-related claim.
- `evidence`: a `string` feature, which provides free text evidence that relates to the previously established claim.
- `label`: a `class label` feature representing the assigned class, where values are 0 ("SUPPORTS"), 1 ("REFUTES"), or 2 ("NOT_ENOUGH_INFO").
- `category`: a `string` feature, which provides additional detail about the particular focus of a given claim.
<br>
This dataset was then broken into train, test and validation sets to enable proper evaluation of our model. These splits contain the following amount of data:
- `Train`: 4298 Records
- `Test`: 1535 Records
- `Val`: 1842 Records
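A minimal sketch of mapping between the integer labels and the class names listed above. Plain dictionaries are used here for illustration; the `datasets` library's `ClassLabel` feature provides the same mapping:

```python
# Illustrative mapping mirroring the class_label names in this card's dataset_info.
ID2LABEL = {0: "SUPPORTS", 1: "REFUTES", 2: "NOT_ENOUGH_INFO"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

def decode_label(label_id):
    """Return the class name for an integer label."""
    return ID2LABEL[label_id]
```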
### Source Data
This dataset represents an evolved version of the original [Climate_Fever](https://huggingface.co/datasets/climate_fever) dataset, hosted by HuggingFace. It was adapted to meet
the needs of our team, as we attempted to solve a specific climate change-related task. The original dataset adopted the FEVER methodology, discussed in more detail [here](https://www.amazon.science/blog/the-fever-data-set-what-doesnt-kill-it-will-make-it-stronger).
Their original dataset consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence
sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs.
### Methodology
This dataset was curated by our team to reduce redundancy and keep only the essentials of a text-entailment problem: claim as the premise and evidence as the hypothesis.
For each given claim, there are multiple sentences of evidence. We decided to expand the one-to-many relation to one-to-one.
This resulted in a modified version of the climate_fever dataset that includes only one evidence sentence per claim.
### Languages
The text contained in the dataset is entirely in English, as found in the real-world claims and Wikipedia-derived evidence sentences described above. The associated BCP-47 code is [`en`](https://www.techonthenet.com/js/language_tags.php), to ensure clear labeling of language usage for downstream tasks and other future applications.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rcds/swiss_doc2doc_ir | 2023-07-20T07:33:37.000Z | [
"task_categories:text-classification",
"task_ids:entity-linking-classification",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:fr",
"language:it",
"... | rcds | null | null | null | 0 | 4 | ---
annotations_creators:
- machine-generated
language:
- de
- fr
- it
language_creators:
- expert-generated
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
pretty_name: 'Swiss Doc2doc Information Retrieval'
size_categories:
- 100K<n<1M
source_datasets:
- original
tags: []
task_categories:
- text-classification
task_ids:
- entity-linking-classification
---
# Dataset Card for Swiss Doc2doc Information Retrieval
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Doc2doc Information Retrieval is a multilingual, diachronic dataset of 131K Swiss Federal Supreme Court (FSCS) cases annotated with law citations and ruling citations, posing a challenging text classification task. As unique labels we use the decision_id of cited rulings and the uuid of cited law articles, which can be found in the SwissCourtRulingCorpus. We also provide additional metadata, i.e., the publication year, the legal area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP.
### Supported Tasks and Leaderboards
Swiss Doc2Doc IR can be used as an information retrieval task over documents in Swiss Legislation (https://huggingface.co/datasets/rcds/swiss_legislation) and Swiss Leading Decisions (https://huggingface.co/datasets/rcds/swiss_leading_decisions).
### Languages
Switzerland has four official languages, of which three are represented here (German: 86K, French: 30K, Italian: 10K decisions). The decisions are written by the judges and clerks in the language of the proceedings.
## Dataset Structure
### Data Instances
```
{
"decision_id": "000127ef-17d2-4ded-8621-c0c962c18fd5",
"language": "de",
"year": 2018,
"chamber": "CH_BGer_008",
"region": "Federation",
"origin_chamber": 47,
"origin_court": 8,
"origin_canton": 151,
"law_area": "social_law",
"law_sub_area": null,
"laws": "['75488867-c001-4eb9-93b9-04264ea91f55', 'e6b06567-1236-4210-adb3-e11c26e497d5', '04bf6369-99cb-41fa-8aff-413679bc8c18', ...]",
"cited_rulings": "['fe8a76b3-8b0f-4f27-a277-2d887140e7ab', '16fef75e-e8d5-4a51-8230-a9ca3676c8a9', '6d21b282-3b23-41dd-9350-6ba5386df9b1', '302fd9f3-e78a-4a9f-9f8d-cde51fcbdfe7']",
"facts": "Sachverhalt: A. A._, geboren 1954, war ab November 2002 als Pflegehilfe im Altersheim C._ angestellt. Am 23. Dezember 2002 meldete sie sich erstmals unter Hinweis auf Depressionen ...",
"considerations": "Erwägungen: 1. 1.1. Die Beschwerde kann wegen Rechtsverletzung gemäss Art. 95 und Art. 96 BGG erhoben werden. Das Bundesgericht wendet das ...",
"rulings": "Demnach erkennt das Bundesgericht: 1. Die Beschwerde wird abgewiesen. 2. Die Gerichtskosten von Fr. 800.- werden der Beschwerdeführerin ...",
}
```
### Data Fields
```
decision_id: (str) a unique identifier for the document
language: (str) one of (de, fr, it)
year: (int) the publication year
chamber: (str) the chamber of the case
region: (str) the region of the case
origin_chamber: (str) the chamber of the origin case
origin_court: (str) the court of the origin case
origin_canton: (str) the canton of the origin case
law_area: (str) the law area of the case
law_sub_area:(str) the law sub area of the case
laws: (str) a list of law ids
cited_rulings: (str) a list of cited ruling ids
facts: (str) the facts of the case
considerations: (str) the considerations of the case
rulings: (str) the rulings of the case
```
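Since `laws` and `cited_rulings` are stored as stringified lists, a small parsing step is needed before using them as labels. A hedged sketch (the helper name is illustrative, not part of the dataset loader):

```python
import ast

def parse_id_list(field):
    """Parse a stringified list such as "['id1', 'id2']" into a Python list of ids.

    Illustrative helper for the `laws` and `cited_rulings` string fields.
    """
    if not field:
        return []
    return ast.literal_eval(field)
```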
### Data Splits
The dataset split is date-stratified:
- Train: 2002-2015
- Validation: 2016-2017
- Test: 2018-2022
| Language | Subset | Number of Documents (Training/Validation/Test) |
|------------|------------|------------------------------------------------|
| German | **de** | 86'832 (59'170 / 19'002 / 8'660) |
| French | **fr** | 46'203 (30'513 / 10'816 / 4'874) |
| Italian | **it** | 8'306 (5'673 / 1'855 / 778) |
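The date-stratified split above can be sketched as a simple year-based assignment function (the helper name is an illustrative assumption, not part of the dataset loader):

```python
def assign_split(year):
    """Assign a case to a split by publication year, per the card's split scheme."""
    if year <= 2015:
        return "train"       # 2002-2015
    if year <= 2017:
        return "validation"  # 2016-2017
    return "test"            # 2018-2022
```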
## Dataset Creation
### Curation Rationale
The dataset was created by Stern et al. (2023).
### Source Data
#### Initial Data Collection and Normalization
The original data are available at the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
### Annotations
#### Annotation process
The decisions have been annotated with citation ids using HTML tags and parsers.
For more details, see the laws (rcds/swiss_legislation) and rulings (rcds/swiss_rulings) datasets.
#### Who are the annotators?
Stern annotated the citations.
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset. |
jonathan-roberts1/Brazilian_Coffee_Scenes | 2023-03-31T15:27:06.000Z | [
"task_categories:image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': coffee
'1': no coffee
splits:
- name: train
num_bytes: 4256968.464
num_examples: 2876
download_size: 2830232
dataset_size: 4256968.464
license: other
task_categories:
- image-classification
---
# Dataset Card for "Brazilian_Coffee_Scenes"
## Dataset Description
- **Paper** [Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?](https://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W13/papers/Penatti_Do_Deep_Features_2015_CVPR_paper.pdf)
### Licensing Information
[CC BY-NC]
## Citation Information
[Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?](https://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W13/papers/Penatti_Do_Deep_Features_2015_CVPR_paper.pdf)
```
@inproceedings{penatti2015deep,
title = {Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?},
author = {Penatti, Ot{\'a}vio AB and Nogueira, Keiller and Dos Santos, Jefersson A},
year = 2015,
booktitle = {Proceedings of the IEEE conference on computer vision and pattern recognition workshops},
pages = {44--51}
}
``` |
jonathan-roberts1/Brazilian_Cerrado-Savanna_Scenes | 2023-03-31T15:28:58.000Z | [
"task_categories:zero-shot-image-classification",
"task_categories:image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': agriculture
'1': arboreal vegetation
'2': herbaceous vegetation
'3': shrubby vegetation
splits:
- name: train
num_bytes: 16933385.557
num_examples: 1311
download_size: 14574976
dataset_size: 16933385.557
license: other
task_categories:
- zero-shot-image-classification
- image-classification
---
# Dataset Card for "Brazilian_Cerrado-Savanna_Scenes"
## Dataset Description
- **Paper** [Towards vegetation species discrimination by using data-driven descriptors](https://vision.unipv.it/CV/materiale2016-17/3rd%20Choice/0022.pdf)
### Licensing Information
[CC BY-NC]
## Citation Information
[Towards vegetation species discrimination by using data-driven descriptors](https://vision.unipv.it/CV/materiale2016-17/3rd%20Choice/0022.pdf)
```
@inproceedings{nogueira2016towards,
title = {Towards vegetation species discrimination by using data-driven descriptors},
author = {Nogueira, Keiller and Dos Santos, Jefersson A and Fornazari, Tamires and Silva, Thiago Sanna Freire and Morellato, Leonor Patricia and Torres, Ricardo da S},
year = 2016,
booktitle = {2016 9th IAPR Workshop on Pattern Recogniton in Remote Sensing (PRRS)},
pages = {1--6},
organization = {Ieee}
}
``` |
niceblueman/icons_dataset | 2023-02-15T15:01:32.000Z | [
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"icons",
"svgs",
"doi:10.57967/hf/0375",
"region:us"
] | niceblueman | null | null | null | 1 | 4 | ---
pretty_name: SQuAD
license: apache-2.0
language:
- en
dataset_info:
features:
- name: word
dtype: string
- name: icon
dtype: string
config_name: svg_icons
splits:
- name: train
num_bytes: 240641
num_examples: 22
download_size: 0
dataset_size: 240641
tags:
- icons
- svgs
size_categories:
- 100M<n<1B
---
# svg_icons
## Dataset Description
- **Homepage:** [text_to_icon.kmoz.dev](https://text_to_icon.kmoz.dev)
- **Repository:** [@KM8Oz/text_to_icon](https://github.com/KM8Oz/text_to_icon)
### Dataset Summary
This dataset maps words to SVG icons (a word/icon label set).
## Dataset Structure
- features:
  - `word` (dtype: string)
  - `icon` (dtype: string)
j-krzywdziak/test2 | 2023-02-17T13:13:40.000Z | [
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"language:pl",
"license:mit",
"region:us"
] | j-krzywdziak | Lorem ipsum | @inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
} | null | 0 | 4 | ---
annotations_creators:
- expert-generated
language:
- pl
license:
- mit
multilinguality:
- monolingual
dataset_info:
- config_name: config
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
jonathan-roberts1/Optimal-31 | 2023-03-31T17:06:29.000Z | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"license:other",
"region:us"
] | jonathan-roberts1 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': airport
'2': baseball diamond
'3': basketball court
'4': beach
'5': bridge
'6': chaparral
'7': church
'8': circular farmland
'9': commercial area
'10': dense residential
'11': desert
'12': forest
'13': freeway
'14': golf course
'15': ground track field
'16': harbor
'17': industrial area
'18': intersection
'19': island
'20': lake
'21': meadow
'22': medium residential
'23': mobile home park
'24': mountain
'25': overpass
'26': parking lot
'27': railway
'28': rectangular farmland
'29': roundabout
'30': runway
splits:
- name: train
num_bytes: 25100636.72
num_examples: 1860
download_size: 25105452
dataset_size: 25100636.72
license: other
task_categories:
- image-classification
- zero-shot-image-classification
---
# Dataset Card for "Optimal-31"
## Dataset Description
- **Paper** [Scene classification with recurrent attention of VHR remote sensing images](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
### Licensing Information
[No license for now, cite the paper below.]
## Citation Information
[Scene classification with recurrent attention of VHR remote sensing images](https://ieeexplore.ieee.org/iel7/5/8045830/07891544.pdf)
```
@article{wang2018scene,
title = {Scene classification with recurrent attention of VHR remote sensing images},
author = {Wang, Qi and Liu, Shaoteng and Chanussot, Jocelyn and Li, Xuelong},
year = 2018,
journal = {IEEE Transactions on Geoscience and Remote Sensing},
publisher = {IEEE},
volume = 57,
number = 2,
pages = {1155--1167}
}
``` |
inkoziev/jokes_dialogues | 2023-02-19T07:07:16.000Z | [
"task_categories:conversational",
"language:ru",
"license:cc-by-nc-4.0",
"region:us"
] | inkoziev | null | null | null | 1 | 4 | ---
license: cc-by-nc-4.0
task_categories:
- conversational
language:
- ru
---
# Dialogues from jokes and anecdotes
The dataset contains the result of parsing jokes scraped from various websites.
## Format
Each sample contains four fields:
"context" - the dialogue context, including all non-dialogue insertions. Note that the context contains both the preceding utterances and other accompanying text, since it establishes the overall setting needed to generate the utterance. Indirect-speech markers have been removed from the utterance.
"utterance" - the dialogue utterance.
"hash" - a hash of the original full text, used to link samples.
"reply_num" - the ordinal number of the dialogue utterance. The last utterance is often the punchline, in which the essence of the joke is concentrated.
One source text can yield several samples if it contained many utterances.
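As a sketch, samples can be regrouped into full dialogues via the `hash` and `reply_num` fields described above (the helper name is illustrative):

```python
from collections import defaultdict

def reconstruct_dialogues(samples):
    """Group samples by `hash` and order utterances by `reply_num`.

    Illustrative helper based on the field descriptions in this card.
    """
    grouped = defaultdict(list)
    for sample in samples:
        grouped[sample["hash"]].append(sample)
    return {
        h: [s["utterance"] for s in sorted(items, key=lambda s: s["reply_num"])]
        for h, items in grouped.items()
    }
```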
svjack/context-dialogue-generate-ds-zh-v1 | 2023-02-21T07:59:42.000Z | [
"region:us"
] | svjack | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: sent
dtype: string
- name: dialogue
sequence: string
- name: L_emb
sequence: float32
splits:
- name: train
num_bytes: 74417088
num_examples: 20000
download_size: 82191201
dataset_size: 74417088
---
# Dataset Card for "context-dialogue-generate-ds-zh-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Brendan/nlp244_french_snli | 2023-02-21T07:32:38.000Z | [
"region:us"
] | Brendan | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
- name: fr_premise
dtype: string
- name: fr_hypothesis
dtype: string
splits:
- name: test
num_bytes: 2298242
num_examples: 10000
- name: train
num_bytes: 122710788
num_examples: 550152
- name: validation
num_bytes: 2305275
num_examples: 10000
download_size: 40406975
dataset_size: 127314305
---
# Dataset Card for "nlp244_french_snli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lansinuote/nlp.3.reading_for_understanding | 2023-02-23T02:10:06.000Z | [
"region:us"
] | lansinuote | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 19646064
num_examples: 10106
- name: validation
num_bytes: 398520
num_examples: 205
download_size: 3916983
dataset_size: 20044584
---
# Dataset Card for "nlp.3.reading_for_understanding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pavanBuduguppa/asr_inverse_text_normalization | 2023-02-22T13:15:29.000Z | [
"license:gpl-3.0",
"region:us"
] | pavanBuduguppa | null | null | null | 0 | 4 | ---
license: gpl-3.0
---
|
philschmid/flanv2 | 2023-02-22T19:39:49.000Z | [
"license:apache-2.0",
"flan",
"flan 2022",
"flan v2",
"arxiv:2301.13688",
"region:us"
] | philschmid | null | null | null | 23 | 4 | ---
license: apache-2.0
tags:
- flan
- flan 2022
- flan v2
pretty_name: Flan v2
duplicated_from: SirNeural/flan_v2
---
# Fork of [SirNeural/flan_v2](https://huggingface.co/datasets/SirNeural/flan_v2)
just in case it gets deleted.
# Dataset Card for Flan V2
## Dataset Description
- **Homepage:** https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html
- **Repository:** https://github.com/google-research/FLAN/tree/main/flan/v2
- **Paper:** https://arxiv.org/abs/2301.13688
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a processed version of the Flan V2 dataset.
I'm not affiliated with the creators; I'm just releasing the files in an easier-to-access format after processing.
The authors of the Flan Collection recommend experimenting with different mixing ratios of tasks to get optimal results downstream.
This current version I've processed is missing a few datasets compared to the main branch of the flan v2 repo:
- cs-en WMT translation task requires manual download and I wasn't able to get the credentials
- q_re_cc dataset preprocessing for the dialog task wasn't working
These are minor hits to the total size of the collection (orders of MB compared to GB) but once those are fixed I will upload a complete version.
## Dataset Structure
### Data Instances
Flan 2021 (flan), P3 (t0), Super-Natural Instructions (niv2), Chain-of-thought (cot), and Dialog (dialog)
### Data Fields
Instruction data comes in a few formats:
- Few Shot (fs)
- Zero Shot (zs)
- Options Provided in context (i.e. multiple choice pick one) (opt)
- No Options Provided (noopt)
Each combination of the above tasks + formats is saved as a JSONL file with the following schema `{"input": ..., "target": ..., "task": ...}`
### Data Splits
Everything is saved as a train split
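Records in that schema can be read back with the standard library; the sketch below assumes a local JSONL file in that format (the demo record and its task name are made up for illustration):

```python
import json

def read_flan_jsonl(lines):
    """Parse Flan V2 JSONL records with the {"input", "target", "task"} schema."""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        rec = json.loads(line)
        records.append((rec["input"], rec["target"], rec["task"]))
    return records

# Illustrative record, not taken from the actual files:
demo = ['{"input": "Translate: bonjour", "target": "hello", "task": "flan_zs_noopt"}']
print(read_flan_jsonl(demo))  # [('Translate: bonjour', 'hello', 'flan_zs_noopt')]
```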
|
undertheseanlp/UTS_Text | 2023-03-03T03:29:39.000Z | [
"task_categories:text-generation",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:vi",
"license:apache-2.0",
"region:us"
] | undertheseanlp | UTSText | \ | null | 0 | 4 | ---
annotations_creators:
- no-annotation
language:
- vi
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: UTS_Text
size_categories:
- 10K<n<100K
task_categories:
- text-generation
---
# Dataset Card for UTS_Text
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
The UTS_Text dataset is a collection of 100,000 sentences sourced from various news articles.
In the base configuration of 10,000 sentences, 5,000 sentences have a length ranging from 50 to 150, while the other 5,000 sentences have a length ranging from 20 to 50. This distribution of sentence lengths provides a diverse range of text samples that can be used to train and test natural language processing models.
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
| name | train | validation | test |
|---------|--------:|-----------:|-------:|
| small | 1600 | 200 | 200 |
| base | 8000 | 1000 | 1000 |
| large | 95000 | 2500 | 2500 |
## Dataset Creation
### Curation Rationale
### Source Data
### Annotations
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
### Contributions
|
bigcode/kgt-notebooks | 2023-02-23T13:29:44.000Z | [
"license:apache-2.0",
"region:us"
] | bigcode | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: content
dtype: string
- name: fname
dtype: string
splits:
- name: train
num_bytes: 187060315209
num_examples: 248761
download_size: 121484294194
dataset_size: 187060315209
license: apache-2.0
---
# Dataset Card for "kgt-notebooks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KonradSzafer/stackoverflow_python_preprocessed | 2023-03-04T23:35:06.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | KonradSzafer | null | null | null | 4 | 4 | ---
dataset_info:
features:
- name: title
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 5119086
num_examples: 3296
download_size: 1939470
dataset_size: 5119086
task_categories:
- question-answering
language:
- en
pretty_name: Stack Overflow Python - Preprocessed
size_categories:
- 1K<n<10K
---
# Dataset Card for "stackoverflow_python_preprocessed"
This is a preprocessed version of the `stackoverflow_python` dataset.
Questions and answers were filtered to only include questions with more than 100 votes and answers with more than 5 votes.
The dataset has been converted from HTML to plain text and only includes the title, question, and answer columns.
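As a rough sketch, the vote-threshold filtering described above could look like the following (the record field names here are assumptions for illustration, not the dataset's actual columns):

```python
def filter_qa(pairs, min_question_votes=100, min_answer_votes=5):
    """Keep only Q&A pairs whose question and answer exceed the vote thresholds."""
    return [
        p for p in pairs
        if p["question_votes"] > min_question_votes and p["answer_votes"] > min_answer_votes
    ]

# Toy records (made up):
demo = [
    {"title": "How do I reverse a list?", "question_votes": 250, "answer_votes": 12},
    {"title": "Obscure edge case", "question_votes": 3, "answer_votes": 1},
]
print(filter_qa(demo))  # only the first record survives
```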
## Additional Information
### License
All Stack Overflow user contributions are licensed under CC-BY-SA 3.0 with attribution required.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BelalElhossany/mgb2_audios_transcriptions_non_overlap | 2023-02-26T10:09:19.000Z | [
"region:us"
] | BelalElhossany | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 901857303.92
num_examples: 4972
download_size: 965382804
dataset_size: 901857303.92
---
# Dataset Card for "mgb2_audios_transcriptions_non_overlap"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Duskfallcrew/80sCartoons | 2023-02-26T10:40:31.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"text",
"text to image",
"stable diffusion",
"80s",
"region:us"
] | Duskfallcrew | null | null | null | 1 | 4 | ---
license: creativeml-openrail-m
task_categories:
- text-to-image
language:
- en
tags:
- text
- text to image
- stable diffusion
- 80s
pretty_name: Eighties Cartoons
size_categories:
- 1K<n<10K
---
# Do not resell the data. You don't own the data, but you do own your own training outputs. See the main license for details
metaeval/implicatures | 2023-02-27T09:01:42.000Z | [
"license:gpl",
"arxiv:2301.05948",
"region:us"
] | metaeval | null | null | null | 1 | 4 | ---
license: gpl
---
Implicature corpus
```bib
@article{george2020conversational,
title={Conversational implicatures in English dialogue: Annotated dataset},
author={George, Elizabeth Jasmi and Mamidi, Radhika},
journal={Procedia Computer Science},
volume={171},
pages={2316--2323},
year={2020},
publisher={Elsevier}
}
```
Augmented with generated distractors https://colab.research.google.com/drive/1ix0FgwzPAjQkIQA2E3ctlylvcmya7vGy?usp=sharing, for tasksource
```bib
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
``` |
IndianaUniversityDatasetsModels/Medical_reports_Splits | 2023-03-10T11:12:02.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | IndianaUniversityDatasetsModels | null | null | null | 3 | 4 | ---
dataset_info:
features:
- name: MeSH
dtype: string
- name: Problems
dtype: string
- name: findings
dtype: string
- name: impression
dtype: string
splits:
- name: train
num_bytes: 1046536.8153707596
num_examples: 2831
- name: test
num_bytes: 92417.59231462024
num_examples: 250
- name: validation
num_bytes: 92417.59231462024
num_examples: 250
download_size: 395063
dataset_size: 1231372
task_categories:
- text-generation
- text2text-generation
language:
- en
pretty_name: Indiana University X-Rays and Reports dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for "Medical_reports_Splits"
Original Source [openi.nlm.nih.gov](https://openi.nlm.nih.gov/)
Kaggle Source [Chest X-rays (Indiana University)](https://www.kaggle.com/datasets/raddar/chest-xrays-indiana-university)
[For more information](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tonypaul2020/amazon_product_data | 2023-03-10T14:03:21.000Z | [
"region:us"
] | tonypaul2020 | null | null | null | 2 | 4 | Entry not found |
katarinagresova/Genomic_Benchmarks_demo_coding_vs_intergenomic_seqs | 2023-10-04T13:10:11.000Z | [
"region:us"
] | katarinagresova | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: seq
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 15900000
num_examples: 75000
- name: test
num_bytes: 5300000
num_examples: 25000
download_size: 2456511
dataset_size: 21200000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "Genomic_Benchmarks_demo_coding_vs_intergenomic_seqs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sedthh/tv_dialogue | 2023-03-16T13:44:59.000Z | [
"task_categories:conversational",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"OpenAssistant",
"transcripts",
"subtitles",
"television",
"region:us"
] | sedthh | null | null | null | 4 | 4 | ---
dataset_info:
features:
- name: TEXT
dtype: string
- name: METADATA
dtype: string
- name: SOURCE
dtype: string
splits:
- name: train
num_bytes: 211728118
num_examples: 2781
download_size: 125187885
dataset_size: 211728118
license: mit
task_categories:
- conversational
- text2text-generation
- text-generation
language:
- en
tags:
- OpenAssistant
- transcripts
- subtitles
- television
pretty_name: TV and Movie dialogue and transcript corpus
size_categories:
- 1K<n<10K
---
# Dataset Card for "tv_dialogue"
This dataset contains transcripts for famous movies and TV shows from multiple sources.
An example dialogue would be:
```
[PERSON 1] Hello
[PERSON 2] Hello Person 2!
How's it going?
(they are both talking)
[PERSON 1] I like being an example
on Huggingface!
They are examples on Huggingface.
CUT OUT TO ANOTHER SCENE
We are somewhere else
[PERSON 1 (v.o)] I wonder where we are?
```
All dialogues were processed to follow this format. Each row is a single episode / movie (**2781** rows total)
following the [OpenAssistant](https://open-assistant.io/) format. The METADATA column contains additional information as a JSON string.
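Since METADATA is stored as a JSON string, it needs to be decoded per row; a minimal sketch (the metadata keys shown are hypothetical — only the column names TEXT, METADATA, and SOURCE come from this card):

```python
import json

def parse_row(row):
    """Return a row's transcript text and its decoded metadata dict."""
    return row["TEXT"], json.loads(row["METADATA"])

# Illustrative row with made-up values:
row = {"TEXT": "[PERSON 1] Hello", "METADATA": '{"show": "Example", "episode": 1}', "SOURCE": "demo"}
text, meta = parse_row(row)
print(meta["show"])  # Example
```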
## Dialogue only, with some information on the scene
| Show | Number of scripts | Via | Source |
|----|----|---|---|
| Friends | 236 episodes | https://github.com/emorynlp/character-mining | friends/emorynlp |
| The Office | 186 episodes | https://www.kaggle.com/datasets/nasirkhalid24/the-office-us-complete-dialoguetranscript | office/nasirkhalid24 |
| Marvel Cinematic Universe | 18 movies | https://www.kaggle.com/datasets/pdunton/marvel-cinematic-universe-dialogue | marvel/pdunton |
| Doctor Who | 306 episodes | https://www.kaggle.com/datasets/jeanmidev/doctor-who | drwho/jeanmidev |
| Star Trek | 708 episodes | http://www.chakoteya.net/StarTrek/index.html based on https://github.com/GJBroughton/Star_Trek_Scripts/ | statrek/chakoteya |
## Actual transcripts with detailed information on the scenes
| Show | Number of scripts | Via | Source |
|----|----|---|---|
| Top Movies | 919 movies | https://imsdb.com/ | imsdb |
| Top Movies | 171 movies | https://www.dailyscript.com/ | dailyscript |
| Stargate SG-1 | 18 episodes | https://imsdb.com/ | imsdb |
| South Park | 129 episodes | https://imsdb.com/ | imsdb |
| Knight Rider | 80 episodes | http://www.knightriderarchives.com/ | knightriderarchives | |
fatmaElsafoury2022/SST_sentiment_fairness_data | 2023-05-16T09:52:58.000Z | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:afl-3.0",
"Fairness dataset",
"sentiment analysis",
"Gender",
"region:us"
] | fatmaElsafoury2022 | null | null | null | 1 | 4 | ---
license: afl-3.0
task_categories:
- text-classification
language:
- en
tags:
- Fairness dataset
- sentiment analysis
- Gender
size_categories:
- n<1K
---
# Sentiment fairness dataset
================================
This dataset is for measuring gender fairness in the downstream task of sentiment analysis. It is a subset of the SST data, filtered to keep only the sentences that contain gender information. The Python code used to create this dataset can be found in the prepare_sst.ipynb file.
The filtered dataset was then labeled by 4 human annotators, who are the authors of this dataset. The annotation instructions are given below.
---
# Annotation Instructions
==============================
Each sentence has two existing labels:
* 'label' gives the sentiment score
* 'gender' gives the guessed gender of the target of the sentiment
The 'gender' label has two tags:
* 'masc' for masculine-gendered words, like 'he' or 'father'
* 'femm' for feminine-gendered words, like 'she' or 'mother'
For each sentence, you are to annotate if the sentence's **sentiment is directed toward a gendered person** i.e. the gender label is correct.
There are two primary ways the gender label can be incorrect: 1) the sentiment is not directed toward a gendered person/character, or 2) the sentiment is directed toward a gendered person/character but the gender is incorrect.
Please annotate **1** if the sentence is **correctly labeled** and **0** if not.
(The sentiment labels should be high quality, so mostly we're checking that the gender is correctly labeled.)
Some clarifying notes:
* If the sentiment is directed towards multiple people with different genders, mark as 0; in this case, the subject of the sentiment is not towards a single gender.
* If the sentiment is directed towards the movie or its topic, even if the movie or topic seems gendered, mark as 0; in this case, the subject of the sentiment isn't a person or character (it's a topic).
* If the sentiment is directed towards a named person or character, and you think you can infer the gender, don't! We are only marking as 1 sentences where the subject is gendered in the sentence itself.
## Positive examples (you'd annotate 1)
* sentence: She gave an excellent performance.
* label: .8
* gender: femm
Sentiment is directed at the 'she'.
---
* sentence: The director gets excellent performances out of his cast.
* label: .7
* gender: masc
Sentiment is directed at the male-gendered director.
---
* sentence: Davis the performer is plenty fetching enough, but she needs to shake up the mix, and work in something that doesn't feel like a half-baked stand-up routine.
* label: .4
* gender: femm
Sentiment is directed at Davis, who is gendered with the pronoun 'she'.
## Negative examples (you'd annotate 0)
* sentence: A near miss for this new director.
* label: .3
* gender: femm
This sentence was labeled 'femm' because it had the word 'miss' in it, but the sentiment is not actually directed towards a feminine person (we don't know the gender of the director).
---
* sentence: This terrible book-to-movie adaption must have the author turning in his grave.
* label: .2
* gender: masc
The sentiment is directed towards the movie, or maybe the director, but not the male-gendered author.
---
* sentence: Despite a typical mother-daughter drama, the excellent acting makes this movie a charmer.
* label: .8
* gender: femm
Sentiment is directed at the acting, not a person or character.
---
* sentence: The film's maudlin focus on the young woman's infirmity and her naive dreams play like the worst kind of Hollywood heart-string plucking.
* label: .8
* gender: femm
Similar to above, the sentiment is directed towards the movie's focus---though the focus may be gendered, we are only keeping sentences where the sentiment is directed towards a gendered person or character.
---
* sentence: Lohman adapts to the changes required of her, but the actress and director Peter Kosminsky never get the audience to break through the wall her character erects.
* label: .4
* gender: femm
The sentiment is directed towards both the actress and the director, who may have different genders.
---
# The final dataset
=====================
The final dataset contains the following columns:
- Sentences: the sentence that contains a sentiment.
- label: the sentiment label, i.e. whether the sentence is positive or negative.
- gender: the gender of the target of the sentiment in the sentence.
- A1: the annotation of the first annotator ("1" means that the gender in the "gender" column is correctly the target of the sentence's sentiment; "0" means otherwise).
- A2: the annotation of the second annotator (same convention as A1).
- A3: the annotation of the third annotator (same convention as A1).
- Keep: a boolean indicating whether to keep this sentence. "Keep" means that the gender of this sentence was labelled as correct by more than one annotator.
- agreement: the number of annotators who agreed on the label.
- correct: the number of annotators who gave the majority label.
- incorrect: the number of annotators who gave the minority label.
**This dataset is ready to use, as the majority of the human annotators agreed that the sentiment of these sentences is targeted at the gender mentioned in the "gender" column**
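The derived columns can be reconstructed from the three annotator columns; the sketch below is one plausible reading of the definitions above, not the exact code used to build the dataset:

```python
def summarize_annotations(a1, a2, a3):
    """Derive Keep/agreement/correct/incorrect from three 0/1 annotations."""
    votes = [a1, a2, a3]
    majority = max(votes.count(1), votes.count(0))  # size of the majority vote
    return {
        "Keep": votes.count(1) >= 2,         # gender judged correct by more than one annotator
        "agreement": majority,               # annotators who agreed on the label
        "correct": majority,                 # annotators who gave the majority label
        "incorrect": len(votes) - majority,  # annotators who gave the minority label
    }

print(summarize_annotations(1, 1, 0))
# {'Keep': True, 'agreement': 2, 'correct': 2, 'incorrect': 1}
```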
---
# Citation
==============
@misc{sst-sentiment-fainress-dataset,
title={A dataset to measure fairness in the sentiment analysis task},
author={Gero, Katy and Butters, Nathan and Bethke, Anna and Elsafoury, Fatma},
howpublished={https://github.com/efatmae/SST_sentiment_fairness_data},
year={2023}
}
|
mxeval/mathqa-x | 2023-03-20T19:21:12.000Z | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"mathqa-x",
"mathqa",
"mxeval",
"arxiv:2210.14868",
"region:us"
] | mxeval | A collection of execution-based multi-lingual benchmark for code generation. | @article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming
and Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
} | null | 1 | 4 | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- mathqa-x
- mathqa
- mxeval
pretty_name: mbxp
size_categories:
- 1K<n<10K
---
# MathQA-X
## Table of Contents
- [MathQA-X](#MathQA-X)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#related-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Executional Correctness](#execution)
- [Execution Example](#execution-example)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
# MathQA-X
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)
### Dataset Summary
This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities and the corresponding data,
namely, a multi-lingual benchmark MBXP, multi-lingual MathQA and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).
### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)
### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
## Dataset Structure
To look up the currently supported configurations:
```python
from datasets import get_dataset_config_names
get_dataset_config_names("mxeval/mathqa-x")
['python', 'java', 'javascript']
```
To load a specific dataset and language
```python
from datasets import load_dataset
load_dataset("mxeval/mathqa-x", "python")
DatasetDict({
test: Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution'],
num_rows: 1883
})
})
```
### Data Instances
An example of a dataset instance:
```python
{
"task_id": "MathQA/0",
"language": "python",
"prompt": "def problem():\n \"\"\"\n a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ? n0 = 5.0 n1 = 31.1\n \"\"\"\n",
"test": "import math\ndef compare(x, y):\n return math.fabs(x-y)<1e-8\ncandidate = problem\nassert compare(candidate(), 38.0)\ndef check(x): pass\n",
"entry_point": "problem",
"canonical_solution": " n0 = 5.0\n n1 = 31.1\n t0 = n1 + 100.0\n t1 = 100.0 - n0\n t2 = t0 * 100.0\n t3 = t2 / t1\n answer = t3 - 100.0\n return answer\n"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to invoke the appropriate subprocess for program execution
### Data Splits
- MathQA-X
- Python
- Java
- Javascript
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in the dump was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Personal and Sensitive Information
None.
### Social Impact of Dataset
With this dataset, code generation models can be evaluated more reliably, which leads to fewer issues being introduced when such models are used.
## Execution
### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> mathqa_python = load_dataset("mxeval/mathqa-x", "python", split="test")
>>> example_problem = mathqa_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'MathQA/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.673357009887695}
```
### Considerations for Using the Data
Make sure to sandbox the execution environment.
### Dataset Curators
AWS AI Labs
### Licensing Information
[LICENSE](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/mathqa-x-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/THIRD_PARTY_LICENSES)
### Citation Information
```
@inproceedings{
athiwaratkun2023multilingual,
title={Multi-lingual Evaluation of Code Generation Models},
author={Ben Athiwaratkun and Sanjay Krishna Gouda and Zijian Wang and Xiaopeng Li and Yuchen Tian and Ming Tan and Wasi Uddin Ahmad and Shiqi Wang and Qing Sun and Mingyue Shang and Sujan Kumar Gonugondla and Hantian Ding and Varun Kumar and Nathan Fulton and Arash Farahani and Siddhartha Jain and Robert Giaquinto and Haifeng Qian and Murali Krishna Ramanathan and Ramesh Nallapati and Baishakhi Ray and Parminder Bhatia and Sudipta Sengupta and Dan Roth and Bing Xiang},
booktitle={The Eleventh International Conference on Learning Representations },
year={2023},
url={https://openreview.net/forum?id=Bo7eeXm6An8}
}
```
### Contributions
[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi) |
oeg/CelebA_RoBERTa_Sp | 2023-06-28T07:52:50.000Z | [
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:translation",
"task_categories:text2text-generation",
"size_categories:100M<n<1B",
"language:es",
"license:apache-2.0",
"CelebA",
"Spanish",
"celebFaces attributes",
"face detection",
"face recog... | oeg | null | null | null | 1 | 4 | ---
license: apache-2.0
task_categories:
- table-question-answering
- question-answering
- translation
- text2text-generation
language:
- es
tags:
- CelebA
- Spanish
- celebFaces attributes
- face detection
- face recognition
pretty_name: RoBERTa+CelebA training corpus in Spanish
size_categories:
- 100M<n<1B
---
## Corpus Summary
This corpus contains 250,000 entries, each made up of a pair of sentences in Spanish and their similarity value in the range 0 to 1. This corpus was used with the
[sentence-transformers](https://www.sbert.net/) library to improve the efficiency of the [RoBERTa-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) base model.
Each pair of sentences consists of textual descriptions of faces from the CelebA dataset, previously translated into Spanish. The process followed to generate the corpus was:
- First, the original English text was translated into Spanish. The original corpus in English was obtained from the work [Text2faceGAN](https://arxiv.org/pdf/1911.11378.pdf).
- An algorithm was implemented that randomly selects two sentences from the translated corpus and calculates their similarity value. _Spacy_ was used to obtain the similarity value of each pair of sentences.
- Since both _Spacy_ and most libraries for calculating sentence similarity only work in English, part of the algorithm consisted in additionally selecting the corresponding pair of sentences from the original corpus in English.
- Each pair of sentences in Spanish and their similarity value, separated by the character "|", are saved as an entry of the new corpus.
The final training corpus for RoBERTa is thus defined by the Spanish text and the similarity score. Training RoBERTa-large-bne on this corpus resulted in the new model [RoBERTa-celebA-Sp](https://huggingface.co/oeg/RoBERTa-CelebA-Sp/blob).
## Corpus Fields
Each corpus entry is composed of:
- Sentence A: Descriptive sentence of a CelebA face in Spanish.
- Sentence B: Descriptive sentence of a CelebA face in Spanish.
- Similarity Value: Similarity of sentence A and sentence B.
Each component is separated by the character "|" with the structure:
```
SentenceA | Sentence B | similarity value
```
You can download the file with a _.txt_ or _.csv_ extension as appropriate.
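A corpus line in this format can be split back into its three fields; a minimal parsing sketch (the sample line is invented, not taken from the corpus, and this assumes sentences themselves contain no "|" character):

```python
def parse_entry(line):
    """Split one corpus line 'sentence A | sentence B | similarity' into its parts."""
    sent_a, sent_b, score = (part.strip() for part in line.split("|"))
    return sent_a, sent_b, float(score)

a, b, sim = parse_entry("El hombre tiene barba. | La mujer sonríe. | 0.42")
print(sim)  # 0.42
```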
## Citation information
**Citing**: If you used CelebA_RoBERTa_Sp corpus in your work, please cite the **[????](???)**:
<!--```bib
@article{inffus_TINTO,
title = {A novel deep learning approach using blurring image techniques for Bluetooth-based indoor localisation},
journal = {Information Fusion},
author = {Reewos Talla-Chumpitaz and Manuel Castillo-Cara and Luis Orozco-Barbosa and Raúl García-Castro},
volume = {91},
pages = {173-186},
year = {2023},
issn = {1566-2535},
doi = {https://doi.org/10.1016/j.inffus.2022.10.011}
}
```-->
## License
This corpus is available under the **[Apache License 2.0](https://github.com/manwestc/TINTO/blob/main/LICENSE)**.
## Authors
- [Eduardo Yauri Lozano](https://github.com/eduar03yauri)
- [Manuel Castillo-Cara](https://github.com/manwestc)
- [Raúl García-Castro](https://github.com/rgcmme)
[*Universidad Nacional de Ingeniería*](https://www.uni.edu.pe/), [*Ontology Engineering Group*](https://oeg.fi.upm.es/), [*Universidad Politécnica de Madrid.*](https://www.upm.es/internacional)
## Contributors
See the full list of contributors [here](https://github.com/eduar03yauri/DCGAN-text2face-forSpanish).
<kbd><img src="https://www.uni.edu.pe/images/logos/logo_uni_2016.png" alt="Universidad Politécnica de Madrid" width="100"></kbd>
<kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-oeg.png" alt="Ontology Engineering Group" width="100"></kbd>
<kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-upm.png" alt="Universidad Politécnica de Madrid" width="100"></kbd> |
Rab0na/bookcorpus_maxlen_32_tokenized | 2023-03-19T08:22:58.000Z | [
"region:us"
] | Rab0na | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: bert_token
sequence: int64
- name: gpt2_token
sequence: int64
splits:
- name: test
num_bytes: 1848440.250435421
num_examples: 6960
- name: train
num_bytes: 18480581597.76182
num_examples: 69585613
download_size: 3934201942
dataset_size: 18482430038.012257
---
# Dataset Card for "bookcorpus_maxlen_32_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saitsharipov/CelebA-HQ | 2023-03-19T16:05:00.000Z | [
"license:unknown",
"region:us"
] | saitsharipov | null | null | null | 0 | 4 | ---
license: unknown
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1409379538.427
num_examples: 202599
download_size: 1392722635
dataset_size: 1409379538.427
---
|
LangChainDatasets/llm-math | 2023-03-20T03:47:19.000Z | [
"license:mit",
"region:us"
] | LangChainDatasets | null | null | null | 0 | 4 | ---
license: mit
---
|
semeru/code-text-python | 2023-03-23T18:46:18.000Z | [
"license:mit",
"arxiv:1909.09436",
"region:us"
] | semeru | null | null | null | 2 | 4 | ---
license: mit
Programminglanguage: "python"
version: "2.7"
Date: "Codesearchnet(Jun 2020 - paper release date)"
Contaminated: "Very Likely"
Size: "Standar Tokenizer (TreeSitter)"
---
### Dataset is imported from CodeXGLUE and pre-processed using their script.
# Where to find in Semeru:
The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/python in Semeru
# CodeXGLUE -- Code-To-Text
## Task Definition
The task is to generate natural language comments for code, and it is evaluated by the [smoothed BLEU-4](https://www.aclweb.org/anthology/C04-1072.pdf) score.
## Dataset
The dataset we use comes from [CodeSearchNet](https://arxiv.org/pdf/1909.09436.pdf) and we filter the dataset as the following:
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples where the number of tokens in the document is < 3 or > 256.
- Remove examples whose documents contain special tokens (e.g. `<img ...>` or `https:...`).
- Remove examples whose documents are not in English.
### Data Format
After preprocessing the dataset, you obtain three .jsonl files: train.jsonl, valid.jsonl and test.jsonl.
Each line in the uncompressed files represents one function. The fields of one row are listed below.
- **repo:** the owner/repo
- **path:** the full path to the original file
- **func_name:** the function or method name
- **original_string:** the raw string before tokenization or parsing
- **language:** the programming language
- **code/function:** the part of the `original_string` that is code
- **code_tokens/function_tokens:** tokenized version of `code`
- **docstring:** the top-level comment or docstring, if it exists in the original string
- **docstring_tokens:** tokenized version of `docstring`
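The fields above can be read straight from the .jsonl files with the standard library. The record below is illustrative (made-up values), not a real row from the dataset:

```python
import json

# Illustrative only: a made-up record carrying the fields listed above,
# not a real row from the dataset.
sample_line = json.dumps({
    "repo": "octocat/hello",
    "path": "src/util.py",
    "func_name": "greet",
    "original_string": "def greet(name):\n    \"\"\"Say hello.\"\"\"\n    return 'hi ' + name",
    "language": "python",
    "code": "def greet(name):\n    return 'hi ' + name",
    "code_tokens": ["def", "greet", "(", "name", ")", ":", "return", "'hi '", "+", "name"],
    "docstring": "Say hello.",
    "docstring_tokens": ["Say", "hello", "."],
})

def read_jsonl(lines):
    """Parse one function record per line, as in train/valid/test.jsonl."""
    return [json.loads(line) for line in lines]

records = read_jsonl([sample_line])
print(records[0]["func_name"], "->", records[0]["docstring"])  # greet -> Say hello.
```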
### Data Statistics
| Programming Language | Training | Dev | Test |
| :------------------- | :------: | :----: | :----: |
| Python | 251,820 | 13,914 | 14,918 |
## Reference
<pre><code>@article{husain2019codesearchnet,
title={Codesearchnet challenge: Evaluating the state of semantic code search},
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
journal={arXiv preprint arXiv:1909.09436},
year={2019}
}</code></pre>
|
MichiganNLP/svo_probes | 2023-06-18T05:28:20.000Z | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-4.0",
"region:us"
] | MichiganNLP | null | null | null | 1 | 4 | ---
license: cc-by-4.0
language:
- en
pretty_name: SVO-Probes
size_categories:
- 10K<n<100K
---
# SVO-Probes
This dataset comes from https://github.com/deepmind/svo_probes.
## Usage
```python
from datasets import load_dataset
# Note that the following line says "train" split, but there are actually no splits in this dataset.
dataset = load_dataset("MichiganNLP/svo_probes", split="train")
# To see an example, access the first element of the dataset with `dataset[0]`.
```
|
enoreyes/imdb_3000_sphere | 2023-03-22T22:10:44.000Z | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | enoreyes | null | null | null | 1 | 4 | ---
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: imdb_3000
size_categories:
- 1K<n<10K
---
# Dataset Card for IMDB 3000 Sphere
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
## Dataset Summary
Large Movie Review Dataset.
This is a 3000 item selection from the `imdb` dataset for binary sentiment classification for use in the Sphere course on AutoTrain.
## Dataset Structure
An example of 'train' looks as follows.
```
{
"label": 0,
"text": "Goodbye world2\n"
}
``` |
slhenty/climate-fever-nli-stsb | 2023-03-24T21:08:44.000Z | [
"license:unknown",
"region:us"
] | slhenty | A modified CLIMATE-FEVER dataset that includes NLI-style features and STSb-features suitable for SentenceBERT training scripts. | @InProceedings{huggingface:dataset,
title = {climate-fever-nli-stsb},
author={Steve Henty, Omdena, "Cologne, Germany Chapter - Detecting Bias in Climate Reporting in English and German Language News Media"},
year={2023}
} | null | 1 | 4 | ---
license: unknown
viewer: false
---
**==========================================**
**_IN PROGRESS - NOT READY FOR LOADING OR USE_**
**==========================================**
---
# Dataset Card for climate-fever-nli-stsb
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The CLIMATE-FEVER dataset modified to supply NLI-style (**cf-nli**) features or STSb-style (**cf-stsb**) features that SentenceBERT training scripts can use as drop-in replacements for AllNLI and/or STSb datasets.
There are two **cf-nli** datasets: one derived from only SUPPORTS and REFUTES evidence (**cf-nli**), and one that also derived data from NOT_ENOUGH_INFO evidence based on the annotator votes (**cf-nli-nei**).
The feature style is specified as a named configuration when loading the dataset: cf-nli, cf-nli-nei, or cf-stsb. See usage notes below for `load_dataset` examples.
### Usage
Load the **cf-nli** dataset
```python
# if datasets not already in your environment
!pip install datasets
from datasets import load_dataset
# all splits...
dd = load_dataset('climate-fever-nli-stsb', 'cf-nli')
# ... or specific split (only 'train' is available)
ds_train = load_dataset('climate-fever-nli-stsb', 'cf-nli', split='train')
## ds_train can now be injected into SentenceBERT training scripts at the point
## where individual sentence pairs are aggregated into
## {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}} dicts
## for further processing into training samples
```
Load the **cf-nli-nei** dataset
```python
# if datasets not already in your environment
!pip install datasets
from datasets import load_dataset
# all splits...
dd = load_dataset('climate-fever-nli-stsb', 'cf-nli-nei')
# ... or specific split (only 'train' is available)
ds_train = load_dataset('climate-fever-nli-stsb', 'cf-nli-nei', split='train')
## ds_train can now be injected into SentenceBERT training scripts at the point
## where individual sentence pairs are aggregated into
## {'claim': {'entailment': set(), 'contradiction': set(), 'neutral': set()}} dicts
## for further processing into training samples
```
Load the **cf-stsb** dataset
```python
# if datasets not already in your environment
!pip install datasets
from datasets import load_dataset
# all splits...
dd = load_dataset('climate-fever-nli-stsb', 'cf-stsb')
# ... or specific split ('train', 'dev', 'test' available)
ds_dev = load_dataset('climate-fever-nli-stsb', 'cf-stsb', split='dev')
## ds_dev (or test) can now be injected into SentenceBERT training scripts at the point
## where individual sentence pairs are aggregated into
## a list of dev (or test) samples
```
<!--
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
-->
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
SentenceBERT models are designed for 'Domain Adaptation' and/or 'Fine-tuning' using labeled data in the downstream task domain. As a bi-encoder, the primary objective function is real-valued similarity scoring. Typical training datasets use NLI-style features as input, and STSb-style features as model evaluation during training, and to measure post-hoc, _intrinsic_ STSb performance. Classification tasks typically use a classifier network that accepts SentenceBERT encodings as input, and is trained on class-labeled datasets.
So, to fine-tune a SentenceBERT model in a climate-change domain, a labeled climate change dataset would be ideal. Much like the authors of the CLIMATE-FEVER dataset, we know of no other _labeled_ datasets specific to climate change. And while CLIMATE-FEVER is suitably labeled for classification tasks, it is not ready for similarity tuning in the style of SentenceBERT.
This modified CLIMATE-FEVER dataset attempts to fill that gap by deriving NLI-style features typically used in pre-training and fine-tuning a SentenceBERT model. SentenceBERT also uses STSb-style features to evaluate model performance both during training and after training to gauge _intrinsic_ model performance on STSb.
### Source Data
#### Initial Data Collection and Normalization
see CLIMATE-FEVER
#### Who are the source language producers?
see CLIMATE-FEVER
<!--
### Annotations
-->
### Annotation process
#### **cf-nli**
For each Claim that has both SUPPORTS evidence and REFUTES evidence, create labeled pairs in the style of NLI dataset:
| split | dataset | sentence1 | sentence2 | label |
|---|---|---|---|---|
| {'train', 'test'} | 'climate-fever' | claim | evidence | evidence_label SUPPORTS -> 'entailment', REFUTES -> 'contradiction' |
> Note that by definition, only claims classified as DISPUTED include both SUPPORTS and REFUTES evidence, so this dataset is limited to a small subset of CLIMATE-FEVER.
#### **cf-nli-nei**
This dataset uses the list of annotator 'votes' to cast NOT_ENOUGH_INFO (NEI) evidence as SUPPORTS or REFUTES evidence. By doing so, Claims in the SUPPORTS, REFUTES, and NEI classes can be used to generate additional sentence pairs.
| votes | effective evidence_label |
|---|---|
| SUPPORTS > REFUTES | _SUPPORTS_ |
| SUPPORTS < REFUTES | _REFUTES_ |
In addition to all the claims in **cf-nli**, any Claims that have
* **_at least one_** SUPPORTS or REFUTES evidence, AND
* NEI evidence that can be cast to an effective _SUPPORTS_ or _REFUTES_
are included in the dataset.
#### **cf-stsb**
For each Claim <-> Evidence pair, create labeled pairs in the style of STSb dataset:
| split | dataset | score | sentence1 | sentence2 |
|---|---|---|---|---|
| {'train', 'dev', 'test'} | 'climate-fever' | cos_sim score | claim | evidence |
This dataset uses 'evidence_label', vote 'entropy', and the list of annotator 'votes' to derive a similarity score for each claim <-> evidence pairing. Similarity score conversion:
> `mean(entropy)` refers to the average entropy within the defined group of evidence
| evidence_label | votes | similarity score |
|---|---|---|
| SUPPORTS | SUPPORTS > 0, REFUTES == 0, NOT_ENOUGH_INFO (NEI) == 0 | 1 |
| | SUPPORTS > 0, REFUTES == 0 | mean(entropy) |
| | SUPPORTS > 0, REFUTES > 0 | 1 - mean(entropy) |
| NEI | SUPPORTS > REFUTES | (1 - mean(entropy)) / 2|
| | SUPPORTS == REFUTES | 0 |
| | SUPPORTS < REFUTES | -(1 - mean(entropy)) / 2 |
| REFUTES | SUPPORTS > 0, REFUTES > 0 | -(1 - mean(entropy)) |
| | SUPPORTS == 0, REFUTES > 0 | -mean(entropy) |
| | SUPPORTS == 0, REFUTES > 0, NEI == 0 | -1 |
The above derivation roughly maps the strength of evidence annotation (REFUTES..NEI..SUPPORTS) to cosine similarity (-1..0..1).
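As a rough illustration, the score conversion table above can be sketched in Python. The function name and argument layout are assumptions for exposition, not the actual script used to build the dataset:

```python
from statistics import mean

def evidence_score(evidence_label, votes, entropies):
    """Map annotator votes to a rough cosine-similarity target in [-1, 1].

    `votes` is a list of per-annotator labels and `entropies` the vote
    entropies within the evidence group, mirroring the table above.
    This is an illustrative sketch, not the dataset's build script.
    """
    s = votes.count("SUPPORTS")
    r = votes.count("REFUTES")
    nei = votes.count("NOT_ENOUGH_INFO")
    m = mean(entropies)
    if evidence_label == "SUPPORTS":
        if r == 0 and nei == 0:
            return 1.0
        return m if r == 0 else 1 - m
    if evidence_label == "NOT_ENOUGH_INFO":
        if s > r:
            return (1 - m) / 2
        if s < r:
            return -(1 - m) / 2
        return 0.0
    # REFUTES
    if s == 0 and nei == 0:
        return -1.0
    return -m if s == 0 else -(1 - m)

print(evidence_score("SUPPORTS", ["SUPPORTS", "SUPPORTS"], [0.0, 0.0]))  # 1.0
```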
<!--
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
ErfanMoosaviMonazzah/fake-news-detection-dataset-English | 2023-03-23T13:05:33.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"fake news",
"text classification",
"region:us"
] | ErfanMoosaviMonazzah | null | null | null | 0 | 4 | ---
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- fake news
- text classification
pretty_name: Fake News Detection Dataset (English)
size_categories:
- 10K<n<100K
---
This is a cleaned and split version of this dataset (https://www.kaggle.com/datasets/sadikaljarif/fake-news-detection-dataset-english) <br>
Labels:
- Fake News: 0
- Real News: 1
<br>
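A minimal sketch for working with these labels in Python; the `decode` helper and the `text` field name are illustrative assumptions, not part of the dataset's official API:

```python
# Minimal sketch for working with this card's labels; the `decode` helper
# and the "text" field name are illustrative assumptions, not part of the
# dataset's official API.
id2label = {0: "Fake News", 1: "Real News"}
label2id = {name: idx for idx, name in id2label.items()}

def decode(example):
    """Return a copy of one example dict with a human-readable label name."""
    return {**example, "label_name": id2label[example["label"]]}

print(decode({"text": "Breaking: ...", "label": 1})["label_name"])  # Real News
```

With the `datasets` library, something like `load_dataset(...).map(decode)` would apply this across a whole split, assuming the column names match.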
You can find the cleansing script at: https://github.com/ErfanMoosaviMonazzah/Fake-News-Detection |
nguyenminh871/reentrancy_solidity_function | 2023-03-24T10:25:20.000Z | [
"region:us"
] | nguyenminh871 | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: func
dtype: string
- name: target
dtype: bool
- name: project
dtype: string
splits:
- name: train
num_bytes: 840896
num_examples: 3203
download_size: 156960
dataset_size: 840896
---
# Dataset Card for "reentrancy_solidity_function"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
somosnlp/somos-clean-alpaca-es | 2023-04-05T15:00:28.000Z | [
"region:us"
] | somosnlp | null | null | null | 12 | 4 | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: 1-instruction
dtype: string
- name: 2-input
dtype: string
- name: 3-output
dtype: string
- name: prediction
list:
- name: label
dtype: string
- name: score
dtype: float64
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: 'null'
- name: annotation_agent
dtype: 'null'
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: tr-flag-1-instruction
dtype: bool
- name: tr-flag-2-input
dtype: bool
- name: tr-flag-3-output
dtype: bool
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
dtype: 'null'
splits:
- name: train
num_bytes: 985217294
num_examples: 51942
download_size: 651888026
dataset_size: 985217294
---
# Dataset Card for "somos-clean-alpaca-es"
This dataset is a translation of the Clean Alpaca dataset into Spanish and serves as the reference for the collaborative effort to clean and improve the dataset during the [Hackathon Somos NLP 2023](https://somosnlp.org/hackathon). *Note: you do not need to participate in the hackathon to contribute to this task.*
The more people and teams that participate, the higher the quality of the final dataset and therefore of the LLM we train, so join in!
Here is how to participate:
> **[Explainer video (10 mins) | Daniel @Argilla](https://www.youtube.com/watch?v=Q-2qsvOEgnA)**
> **[Article "Ayuda a mejorar los LLM de AI en español en 7 sencillos pasos" | Carlos @Platzi](https://platzi.com/blog/ayuda-a-mejorar-los-llm-en-espanol-en-7-sencillos-pasos/)**
We are available in the **[#alpaca-es channel](https://discord.com/invite/my8w7JUxZR)** of the Somos NLP Discord server.
## 🔥 The challenge
The steps and rules for participating are described below:
1. This dataset must be used as the starting point, keeping both the `ids` and the structure. This makes later cross-validation and programmatic improvement of the final dataset possible.
2. The dataset is in an Argilla-compatible format. Each team or person who wants to participate can work with their own Argilla instance. An easy way to get started is to duplicate the Space we have created for the challenge. The section below explains how.
3. Argilla can be used to validate and label records manually, and via searches and semantic similarity from the UI. Examples of the search query language will be posted on this page, but we recommend consulting [the usage guide](https://docs.argilla.io/en/latest/guides/query_datasets.html).
4. Human validation is necessary to guarantee the final quality, but programmatic cleanups can also be applied where they are more efficient. In any case, for the experiment to succeed, the proposed labels must be used even if the dataset is modified programmatically.
5. Records must not be deleted from the dataset; if a record is invalid, indicate it with a label (for example `BAD INPUT`) or with the `discard` status.
6. Before starting to annotate, it is necessary to read the [annotation guide](guia-de-anotacion.md) in full.
The outcome of the challenge will be one dataset per person or team containing the original dataset partially labeled, and optionally other versions/subsets of the dataset with corrected, improved or augmented data. In those cases it is advisable to keep a separate dataset with the original ids.
At the end we will combine all the labeled versions to obtain a high-quality dataset.
## ✅ How to start labeling
To label the dataset you have to:
1. Launch your Argilla Space by following [this link](https://huggingface.co/spaces/somosnlp/somos-alpaca-es?duplicate=true). This will guide you through creating an Argilla instance on the Hub that automatically loads the dataset (see the screenshot below). **IMPORTANT**: the Space must be Public so that the labeled data can be read from Python. Loading can take up to 10 minutes; you can check the logs to verify that the data is loading.
2. **IMPORTANT:** If you want to sync the validated data with the Hub so that annotations are not lost when the Space restarts, you must configure two secrets (in the Space Settings): `HF_TOKEN`, which is [your write token](https://huggingface.co/settings/tokens), and `HUB_DATASET_NAME`, the dataset where you want to store them; be sure to include the organization or user followed by a / and the dataset name. For example `juanmartinez/somos-clean-alpaca-es-validations` or `miempresa/somos-clean-alpaca-es-validations`.
3. The username and password are `argilla` / `1234`. While your Argilla Space loads the dataset, you can take the opportunity to read the annotation guides.
4. Although the annotated dataset will in principle be synced, we recommend opening Colab or a local notebook and periodically saving the dataset to a dataset on the Hub (it can be in your personal space or your organization). For this we recommend reading the section on how to save the dataset to the Hub.
We recommend checking the Space log for errors when configuring the `HF_TOKEN` and `HUB_DATASET_NAME` secrets.

## 🚀 Deploy Argilla locally or on a cloud server
For teams that have the time and want to deploy a version with more compute capacity and stability than Spaces, [here is an explanatory guide](https://docs.argilla.io/en/latest/getting_started/installation/deployments/deployments.html).
Once installed, the data must be uploaded with [this notebook](https://colab.research.google.com/drive/1KyikSFeJe6_lQNs-9cHveIOGM99ENha9#scrollTo=jbfdRoRVXTW6).
## ✍️ Annotation guides
Before starting to annotate, it is necessary to read the [annotation guide](guia-de-anotacion.md) in full.
## 💾 IMPORTANT: Save the dataset to the Hub periodically
Although the Space has been configured to sync with a Hub dataset of your choice, for extra safety we recommend saving a copy of the dataset to the Hub by running the following code. You need to log in with Python using `from huggingface_hub import notebook_login` or to pass the token directly when calling push_to_hub:
```python
import argilla as rg
# use rg.init() to set the API_URL (the direct URL of your Argilla Space) and API_KEY
rg.init(
    api_url="https://tu-space-de-argilla.hf.space",
    api_key="team.apikey"
)
# Read the dataset with validations from Argilla
rg_dataset = rg.load("somos-clean-alpaca-es-team", query="status:Validated")
# Convert to datasets format
dataset = rg_dataset.to_datasets()
# Publish to the Hub; you can use any dataset name you choose
dataset.push_to_hub("somos-clean-alpaca-es", token="YOUR WRITE TOKEN FROM HUB SETTINGS. NOT NEEDED IF YOU HAVE LOGGED IN")
```
Once this is done, the dataset can be retrieved and loaded back into Argilla with the "How to load the dataset into Argilla" notebook.
## 🔎 Example queries and tips for labeling
We recommend starting by exploring and labeling the dataset sequentially to understand its structure and identify patterns.
Once that is done, we recommend combining it with the following tools:
### Using the search box
With keywords, as well as regular expressions, wildcards and boolean expressions; see [the usage guide](https://docs.argilla.io/en/latest/guides/query_datasets.html).
An interesting feature is the ability to search only within specific fields. For this, use the following syntax `inputs.field_name:"query"`:
For example: `inputs.1-instruction:"Crear una página"` would find all records with this text in the instruction.
Moreover, this can be combined with boolean expressions to search across several fields: `inputs.1-instruction:"Crear una página" AND inputs.3-output:"html"`
Other examples:
Finding instruction sentences in English: `inputs.1-instruction:Edit the following sentence` finds more than 100 invalid instructions.
### Find similar
When we find interesting or erroneous patterns in a record and field, we can use the Find similar button to retrieve similar examples thanks to similarity search using embeddings.
### Bulk labeling
If we find a very clear pattern, we can review the examples faster and annotate them in bulk using the top bar, below the search box. If there are many examples, the number of records per page can be increased. In any case, we recommend reviewing the examples.
## ✨ Hackathon Somos NLP 2023
- It is not necessary to participate in the hackathon to join this collaborative task.
- Teams participating in the hackathon can use their labeled version of this dataset for their project.
- Labeled versions of this dataset will be eligible to win the honorable mention for best labeled dataset.
## 🙌 Acknowledgements
Many thanks
to `versae` from the BERTIN project for translating the dataset,
to `dvilasuero` and `nataliaElv` from Argilla for creating the documentation and answering all the participants' questions,
to `alarcon7a` from Platzi for writing the blog article, and
to `mariagrandury` from Somos NLP for coordinating and integrating the challenge into the hackathon.
When combining the versions and creating the final dataset, we will mention everyone who took part in this effort 🤗 |
ID3/wikilibros_artesculinarias_recetas | 2023-03-26T03:33:17.000Z | [
"language:es",
"license:cc-by-sa-3.0",
"region:us"
] | ID3 | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: comensales
dtype: string
- name: tiempo
dtype: string
- name: dificultad
dtype: string
- name: ingredientes
sequence: string
- name: procedimiento
sequence: string
- name: titulo
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 727791
num_examples: 753
- name: validation
num_bytes: 78214
num_examples: 84
download_size: 444915
dataset_size: 806005
license: cc-by-sa-3.0
language:
- es
pretty_name: Recetas de cocina Wikilibros
---
# Dataset Card for "wikilibros_artesculinarias_recetas"
## Dataset Description
Subset of cooking recipes extracted from [Artes Culinarias](https://es.wikibooks.org/wiki/Artes_culinarias/Recetas) |
vietgpt/alpaca_vi | 2023-07-03T13:49:13.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:vi",
"SFT",
"region:us"
] | vietgpt | null | null | null | 0 | 4 | ---
dataset_info:
features:
- name: text
dtype: string
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 60333806.54451752
num_examples: 51548
download_size: 28605089
dataset_size: 60333806.54451752
task_categories:
- text-generation
language:
- vi
tags:
- SFT
size_categories:
- 10K<n<100K
---
# Alpaca-Cleaned
- Source: https://huggingface.co/datasets/yahma/alpaca-cleaned
- Num examples: 51,548
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("vietgpt/alpaca_vi")
```
- Format for Instruction
```python
def preprocess(
sample,
instruction_key="### Instruction:",
input_key="Input:",
response_key="### Response:",
end_key="<|endoftext|>"
):
instruction = sample['instruction']
input = sample['input']
response = sample['output']
if input:
return {'text': """Dưới đây là một hướng dẫn mô tả một tác vụ, được ghép nối với một đầu vào cung cấp thêm ngữ cảnh. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
input_key=input_key,
input=input,
response_key=response_key,
response=response,
end_key=end_key,
)}
else:
return {'text': """Dưới đây là một hướng dẫn mô tả một nhiệm vụ. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
{instruction_key}
{instruction}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
Dưới đây là một hướng dẫn mô tả một nhiệm vụ. Viết một phản hồi hoàn thành yêu cầu một cách thích hợp.
### Instruction:
Đưa ra ba lời khuyên để giữ gìn sức khỏe.
### Response:
1. Ăn một chế độ ăn cân bằng và chắc chắn bao gồm đủ rau và hoa quả.
2. Tập thể dục thường xuyên để giữ cho cơ thể của bạn hoạt động và khỏe mạnh.
3. Ngủ đủ giấc và duy trì lịch trình ngủ ổn định.
<|endoftext|>
"""
``` |
cartesinus/leyzer-fedcsis-translated | 2023-03-27T21:52:34.000Z | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:pl",
"license:cc-by-4.0",
"natural-language-understanding",
"region:us"
] | cartesinus | Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language
understanding (NLU) models and the strategies of localization of virtual assistants. It consists of 20
domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of
samples, ranging from 1 to 672 sentences per intent. | @inproceedings{sowanski2020leyzer,
title={Leyzer: A Dataset for Multilingual Virtual Assistants},
author={Sowa{\'n}ski, Marcin and Janicki, Artur},
booktitle={International Conference on Text, Speech, and Dialogue},
pages={477--486},
year={2020},
organization={Springer}
} | null | 0 | 4 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- pl
tags:
- natural-language-understanding
size_categories:
- 10K<n<100K
---
# Leyzer: A Dataset for Multilingual Virtual Assistants
Leyzer is a multilingual text corpus designed to study multilingual and cross-lingual natural language understanding (NLU) models and the strategies of localization of
virtual assistants. It consists of 20 domains across three languages: English, Spanish and Polish, with 186 intents and a wide range of samples, ranging from 1 to 672
sentences per intent. For more statistics, please refer to the wiki.
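As a sketch of the kind of per-intent sample counts quoted above, using made-up records (the field names are illustrative assumptions, not the corpus schema):

```python
from collections import Counter

# Made-up utterances for illustration; the field names are assumptions,
# not the corpus schema. The real corpus spans 20 domains, 186 intents
# and three languages (English, Spanish and Polish).
utterances = [
    {"domain": "Email", "intent": "SendEmail", "text": "wyślij e-mail do Anny"},
    {"domain": "Email", "intent": "SendEmail", "text": "napisz wiadomość do Jana"},
    {"domain": "Weather", "intent": "CheckWeather", "text": "jaka jest pogoda?"},
]

per_intent = Counter(u["intent"] for u in utterances)
print(per_intent.most_common())  # [('SendEmail', 2), ('CheckWeather', 1)]
```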
|
swype/instruct | 2023-04-05T23:14:28.000Z | [
"license:mit",
"region:us"
] | swype | A dataset containing prompt and completion pairs for various tasks. | @misc{srikanth2023swypedataset,
author = {Srikanth Srinivas},
title = {Swype.com Dataset},
year = {2023},
publisher = {Swype.com},
howpublished = {\\url{https://swype.com}},
email = {s@swype.com}
} | null | 48 | 4 | ---
license: mit
---
# A large instruct dataset
This dataset is a combination of multiple sources, including the GPT4All dataset, the Alpaca dataset from Stanford, custom generation using AllenAI augmentation, and some dataset augmentation from open-source Meta datasets. The dataset is split into 70% for training, 20% for validation, and 10% for testing.
## Description
The Swype.com dataset contains prompt and completion pairs for various tasks. It's an augmented version of the following datasets:
- [GPT4All](https://github.com/nomic-ai/gpt4all): A dataset containing a wide range of tasks for training and evaluating general-purpose language models.
- [Alpaca dataset from Stanford](https://github.com/tatsu-lab/stanford_alpaca): A dataset containing prompts, completions, and annotations for controllable text generation.
- Custom generation using [AllenAI augmentation](https://allenai.org): Augmentation performed using the advanced NLP tools provided by AllenAI.
- Some dataset augmentation from open-source Meta datasets: Additional augmentation from various open-source Meta datasets.
The dataset is designed for training and evaluating language models on diverse tasks, with a focus on controllable and instruction-based text generation.
## Dataset Structure
The dataset contains the following columns:
- `prompt`: The input prompt string, representing a task or question.
- `completion`: The output completion string, representing the answer or generated text based on the prompt.
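The 70/20/10 split described above can be sketched as follows, over placeholder prompt/completion pairs (not real rows from this dataset):

```python
import random

# Sketch of the 70/20/10 split described above, over placeholder
# prompt/completion pairs (not real rows from this dataset).
pairs = [{"prompt": f"Q{i}", "completion": f"A{i}"} for i in range(10)]

random.Random(0).shuffle(pairs)  # fixed seed for reproducibility
n = len(pairs)
train = pairs[: int(0.7 * n)]
valid = pairs[int(0.7 * n) : int(0.9 * n)]
test = pairs[int(0.9 * n) :]
print(len(train), len(valid), len(test))  # 7 2 1
```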
## Citation
If you use this dataset in your research or work, please cite it as follows:
```bibtex
@misc{srikanth2023swypedataset,
  author = {Srikanth Srinivas},
  title = {Swype.com Dataset},
  year = {2023},
  publisher = {Swype.com},
  howpublished = {\url{https://swype.com}},
  email = {s@swype.com}
}
``` |
vietgpt/daily_dialog_en | 2023-03-30T19:39:19.000Z | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"SFT",
"region:us"
] | vietgpt | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: tokenized_dialog
sequence: string
- name: dialog
sequence: string
splits:
- name: train
num_bytes: 11506589
num_examples: 11118
- name: validation
num_bytes: 1063103
num_examples: 1000
- name: test
num_bytes: 1037258
num_examples: 1000
download_size: 8375068
dataset_size: 13606950
task_categories:
- conversational
language:
- en
tags:
- SFT
size_categories:
- 10K<n<100K
---
# DailyDialog
- Source: https://huggingface.co/datasets/daily_dialog
- Num examples:
- 11,118 (train)
- 1,000 (validation)
- 1,000 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/daily_dialog_en")
``` |
vietgpt/alpaca_en | 2023-07-03T13:48:57.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"SFT",
"region:us"
] | vietgpt | null | null | null | 1 | 4 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 46071779.64893658
num_examples: 51848
download_size: 24154901
dataset_size: 46071779.64893658
task_categories:
- text-generation
language:
- en
tags:
- SFT
size_categories:
- 10K<n<100K
---
# Alpaca-Cleaned
- Source: https://huggingface.co/datasets/yahma/alpaca-cleaned
- Num examples: 51,848
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/alpaca_en")
```
- Format for Instruction task
```python
def preprocess(
sample,
instruction_key="### Instruction:",
input_key="Input:",
response_key="### Response:",
end_key="<|endoftext|>"
):
instruction = sample['instruction']
input = sample['input']
response = sample['output']
if input:
return {'text': """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{input_key}
{input}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
input_key=input_key,
input=input,
response_key=response_key,
response=response,
end_key=end_key,
)}
else:
return {'text': """Below is an instruction that describes a task. Write a response that appropriately completes the request.
{instruction_key}
{instruction}
{response_key}
{response}
{end_key}""".format(
instruction_key=instruction_key,
instruction=instruction,
response_key=response_key,
response=response,
end_key=end_key,
)}
"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Give three tips for staying healthy.
### Response:
1.Eat a balanced diet and make sure to include plenty of fruits and vegetables.
2. Exercise regularly to keep your body active and strong.
3. Get enough sleep and maintain a consistent sleep schedule.
<|endoftext|>
"""
``` |
sampath017/plants | 2023-03-30T04:03:57.000Z | [
"task_categories:image-classification",
"size_categories:n<1K",
"language:en",
"license:gpl-3.0",
"region:us"
] | sampath017 | null | null | null | 0 | 4 | ---
license: gpl-3.0
task_categories:
- image-classification
language:
- en
pretty_name: 'plants images '
size_categories:
- n<1K
--- |
rcds/swiss_leading_decisions | 2023-07-20T07:38:35.000Z | [
"task_categories:text-classification",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:de",
"language:it",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2306.09237"... | rcds | null | null | null | 2 | 4 | ---
license: cc-by-sa-4.0
language:
- de
- it
- fr
size_categories:
- 10K<n<100K
annotations_creators:
- machine-generated
language_creators:
- expert-generated
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- text-classification
pretty_name: Swiss Leading Decisions
---
# Dataset Card for Swiss Leading Decisions
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Swiss Leading Decisions is a multilingual, diachronic dataset of 21K Swiss Federal Supreme Court (FSCS) cases. This dataset is part of a challenging text classification task. We also provide additional metadata such as the publication year, the law area and the canton of origin per case, to promote robustness and fairness studies on the critical area of legal NLP.
### Supported Tasks and Leaderboards
Swiss Leading Decisions has been used in a text classification task.
### Languages
Switzerland has four official languages; three of them (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents |
|------------|------------|----------------------|
| German | **de** | 14K |
| French | **fr** | 6K |
| Italian | **it** | 1K |
## Dataset Structure
### Data Fields
```
decision_id: (str) a unique identifier for the document
language: (int64) one of (0,1,2)
chamber_id: (int64) id to identify the chamber
file_id: (int64) id to identify the file
date: (int64)
topic: (string)
year: (float64)
language: (string)
facts: (string) text section of the full text
facts_num_tokens_bert: (int64)
facts_num_tokens_spacy: (int64)
considerations: (string) text section of the full text
considerations_num_tokens_bert: (int64)
considerations_num_tokens_spacy: (int64)
rulings: (string) text section of the full text
rulings_num_tokens_bert: (int64)
rulings_num_tokens_spacy: (int64)
chamber: (string)
court: (string)
canton: (string)
region: (string)
file_name: (string)
html_url: (string)
pdf_url: (string)
file_number: (string)
```
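As a rough sketch of how records with this schema can be tallied into the per-language subsets shown above (using toy in-memory records in place of the real 21K-case corpus; standard-library code only):

```python
from collections import Counter

# Toy records mimicking the schema above (the real corpus has ~21K cases)
cases = [
    {"decision_id": "a1", "language": "de", "year": 2004.0},
    {"decision_id": "b2", "language": "de", "year": 2010.0},
    {"decision_id": "c3", "language": "fr", "year": 2015.0},
    {"decision_id": "d4", "language": "it", "year": 2021.0},
]

# Count decisions per language, most frequent first
language_counts = Counter(case["language"] for case in cases)
print(language_counts.most_common())
```

On the real corpus the same tally would reproduce the German/French/Italian ordering in the table above.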
### Data Instances
[More Information Needed]
### Data Splits
## Dataset Creation
### Curation Rationale
The dataset was created by Stern (2023).
### Source Data
#### Initial Data Collection and Normalization
The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML.
#### Who are the source language producers?
The decisions are written by the judges and clerks in the language of the proceedings.
### Annotations
#### Annotation process
#### Who are the annotators?
Metadata is published by the Swiss Federal Supreme Court (https://www.bger.ch).
### Personal and Sensitive Information
The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf)
© Swiss Federal Supreme Court, 2002-2022
The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf
### Citation Information
Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237)
```
@misc{rasiah2023scale,
title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation},
author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus},
year={2023},
eprint={2306.09237},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@Stern5497](https://github.com/stern5497) for adding this dataset. |
Francesco/brain-tumor-m2pbp | 2023-03-30T09:11:06.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': brain-tumor
'1': label0
'2': label1
'3': label2
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: brain-tumor-m2pbp
tags:
- rf100
---
# Dataset Card for brain-tumor-m2pbp
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
brain-tumor-m2pbp
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
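The `bbox` lists above follow the COCO `[x_min, y_min, width, height]` convention; a minimal, standard-library sketch of converting one box to corner coordinates:

```python
def coco_to_corners(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box
    to [x_min, y_min, x_max, y_max] corner coordinates."""
    x_min, y_min, width, height = bbox
    return [x_min, y_min, x_min + width, y_min + height]

# First box from the sample instance above
print(coco_to_corners([302.0, 109.0, 73.0, 52.0]))  # [302.0, 109.0, 375.0, 161.0]
```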
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
### Citation Information
```
@misc{ brain-tumor-m2pbp,
title = { brain tumor m2pbp Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/brain-tumor-m2pbp } },
url = { https://universe.roboflow.com/object-detection/brain-tumor-m2pbp },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Francesco/poker-cards-cxcvz | 2023-03-30T09:14:35.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': poker-cards
'1': 59
'2': 10 Diamonds
'3': 10 Hearts
'4': 10 Spades
'5': 10 Trefoils
'6': 2 Diamonds
'7': 2 Hearts
'8': 2 Spades
'9': 2 Trefoils
'10': 3 Diamonds
'11': 3 Hearts
'12': 3 Spades
'13': 3 Trefoils
'14': 4 Diamonds
'15': 4 Hearts
'16': 4 Spades
'17': 4 Trefoils
'18': 5 Diamonds
'19': 5 Hearts
'20': 5 Spades
'21': 5 Trefoils
'22': 6 Diamonds
'23': 6 Hearts
'24': 6 Spades
'25': 6 Trefoils
'26': 7 Diamonds
'27': 7 Hearts
'28': 7 Spades
'29': 7 Trefoils
'30': 8 Diamonds
'31': 8 Hearts
'32': 8 Spades
'33': 8 Trefoils
'34': 9 Diamonds
'35': 9 Hearts
'36': 9 Spades
'37': 9 Trefoils
'38': A Diamonds
'39': A Hearts
'40': A Spades
'41': A Trefoils
'42': J Diamonds
'43': J Hearts
'44': J Spades
'45': J Trefoils
'46': K Diamonds
'47': K Hearts
'48': K Spades
'49': K Trefoils
'50': Q Diamonds
'51': Q Hearts
'52': Q Spades
'53': Q Trefoils
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: poker-cards-cxcvz
tags:
- rf100
---
# Dataset Card for poker-cards-cxcvz
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/poker-cards-cxcvz
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
poker-cards-cxcvz
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
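A quick standard-library check that the `area` values in the sample instance equal `width * height` of each `bbox`:

```python
# The four bounding boxes from the sample instance above
bboxes = [
    [302.0, 109.0, 73.0, 52.0],
    [810.0, 100.0, 57.0, 28.0],
    [160.0, 31.0, 248.0, 616.0],
    [741.0, 68.0, 202.0, 401.0],
]

# Area of each box is width * height
areas = [int(w * h) for _, _, w, h in bboxes]
print(areas)  # [3796, 1596, 152768, 81002], matching the sample's `area` list
```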
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/poker-cards-cxcvz
### Citation Information
```
@misc{ poker-cards-cxcvz,
title = { poker cards cxcvz Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/poker-cards-cxcvz } },
url = { https://universe.roboflow.com/object-detection/poker-cards-cxcvz },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
Francesco/chess-pieces-mjzgj | 2023-03-30T09:31:59.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 2 | 4 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': chess-pieces
'1': bishop
'2': black-bishop
'3': black-king
'4': black-knight
'5': black-pawn
'6': black-queen
'7': black-rook
'8': white-bishop
'9': white-king
'10': white-knight
'11': white-pawn
'12': white-queen
'13': white-rook
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: chess-pieces-mjzgj
tags:
- rf100
---
# Dataset Card for chess-pieces-mjzgj
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/chess-pieces-mjzgj
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
chess-pieces-mjzgj
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 640,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
- `category`: the object's category.
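The integer `category` values index into the `class_label` names defined in this card's dataset config; a small sketch of decoding them (names copied from the YAML block above):

```python
# Class-label names from the dataset config above
names = [
    "chess-pieces", "bishop", "black-bishop", "black-king", "black-knight",
    "black-pawn", "black-queen", "black-rook", "white-bishop", "white-king",
    "white-knight", "white-pawn", "white-queen", "white-rook",
]

# Category ids from the sample instance above
categories = [4, 4, 0, 0]
print([names[c] for c in categories])
# ['black-knight', 'black-knight', 'chess-pieces', 'chess-pieces']
```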
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/chess-pieces-mjzgj
### Citation Information
```
@misc{ chess-pieces-mjzgj,
title = { chess pieces mjzgj Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/chess-pieces-mjzgj } },
url = { https://universe.roboflow.com/object-detection/chess-pieces-mjzgj },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |