# Dataset Card for `wapo/v2/trec-news-2019`
The `wapo/v2/trec-news-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v2/trec-news-2019).
# Data
This dataset provides:
- `queries` (i.e., topics); count=60
- `qrels` (relevance assessments); count=15,655
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/wapo_v2_trec-news-2019', 'queries')
for record in queries:
record # {'query_id': ..., 'doc_id': ..., 'url': ...}
qrels = load_dataset('irds/wapo_v2_trec-news-2019', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
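Once loaded, the qrels are typically grouped by query before evaluation. A minimal sketch of that step (the records below are illustrative placeholders in the schema shown above, not actual values from the dataset):

```python
from collections import defaultdict

# Illustrative qrels records following the schema above (hypothetical values).
qrels = [
    {'query_id': '816', 'doc_id': 'doc-a', 'relevance': 2, 'iteration': '0'},
    {'query_id': '816', 'doc_id': 'doc-b', 'relevance': 0, 'iteration': '0'},
    {'query_id': '817', 'doc_id': 'doc-c', 'relevance': 4, 'iteration': '0'},
]

# Group judgments as query_id -> {doc_id: relevance}, the shape most
# IR evaluation tools expect for a qrels mapping.
qrels_by_query = defaultdict(dict)
for record in qrels:
    qrels_by_query[record['query_id']][record['doc_id']] = record['relevance']

print(dict(qrels_by_query))
```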
## Citation Information
```
@inproceedings{Soboroff2019News,
title={TREC 2019 News Track Overview},
author={Ian Soboroff and Shudong Huang and Donna Harman},
booktitle={TREC},
year={2019}
}
```
# Dataset Card for `wapo/v3/trec-news-2020`
The `wapo/v3/trec-news-2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wapo#wapo/v3/trec-news-2020).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels` (relevance assessments); count=17,764
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/wapo_v3_trec-news-2020', 'queries')
for record in queries:
record # {'query_id': ..., 'doc_id': ..., 'url': ...}
qrels = load_dataset('irds/wapo_v3_trec-news-2020', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
# Dataset Card for `wikiclir/ar`
The `wikiclir/ar` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ar).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=535,118
- `queries` (i.e., topics); count=324,489
- `qrels` (relevance assessments); count=519,269
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_ar', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_ar', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_ar', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
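Records with this schema can be serialized to the standard whitespace-delimited TREC qrels file format (`query_id iteration doc_id relevance`) for use with external tools such as `trec_eval`. A minimal sketch, using hypothetical records rather than actual dataset values:

```python
import io

# Illustrative qrels records following the schema above (hypothetical values).
qrels = [
    {'query_id': 'q1', 'doc_id': 'd1', 'relevance': 1, 'iteration': '0'},
    {'query_id': 'q1', 'doc_id': 'd2', 'relevance': 0, 'iteration': '0'},
]

# Write one line per judgment: query_id iteration doc_id relevance.
buf = io.StringIO()
for r in qrels:
    buf.write(f"{r['query_id']} {r['iteration']} {r['doc_id']} {r['relevance']}\n")

print(buf.getvalue(), end='')
```

In practice the same loop would write to a real file, which can then be passed to an evaluation tool alongside a run file.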
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/ca`
The `wikiclir/ca` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ca).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=548,722
- `queries` (i.e., topics); count=339,586
- `qrels` (relevance assessments); count=965,233
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_ca', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_ca', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_ca', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/cs`
The `wikiclir/cs` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/cs).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=386,906
- `queries` (i.e., topics); count=233,553
- `qrels` (relevance assessments); count=954,370
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_cs', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_cs', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_cs', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/de`
The `wikiclir/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/de).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,091,278
- `queries` (i.e., topics); count=938,217
- `qrels` (relevance assessments); count=5,550,454
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_de', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_de', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_de', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/en-simple`
The `wikiclir/en-simple` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/en-simple).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=127,089
- `queries` (i.e., topics); count=114,572
- `qrels` (relevance assessments); count=250,380
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_en-simple', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_en-simple', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_en-simple', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
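To inspect the documents that the qrels judge, the corpus can be indexed by `doc_id` so each judgment resolves to its document in constant time. A minimal sketch with hypothetical records in the schema shown above:

```python
# Illustrative docs and qrels records (hypothetical values, not from the dataset).
docs = [
    {'doc_id': 'd1', 'title': 'Moon', 'text': 'The Moon orbits Earth.'},
    {'doc_id': 'd2', 'title': 'Sun', 'text': 'The Sun is a star.'},
]
qrels = [
    {'query_id': 'q1', 'doc_id': 'd2', 'relevance': 1, 'iteration': '0'},
]

# Build a doc_id -> record lookup over the corpus.
doc_index = {d['doc_id']: d for d in docs}

# Resolve each judged doc_id to its title and text.
for r in qrels:
    doc = doc_index[r['doc_id']]
    print(r['query_id'], doc['title'], r['relevance'])
```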
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/es`
The `wikiclir/es` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/es).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,302,958
- `queries` (i.e., topics); count=781,642
- `qrels` (relevance assessments); count=2,894,807
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_es', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_es', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_es', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/fi`
The `wikiclir/fi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fi).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=418,677
- `queries` (i.e., topics); count=273,819
- `qrels` (relevance assessments); count=939,613
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_fi', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_fi', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_fi', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/fr`
The `wikiclir/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/fr).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,894,397
- `queries` (i.e., topics); count=1,089,179
- `qrels` (relevance assessments); count=5,137,366
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_fr', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_fr', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_fr', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/it`
The `wikiclir/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/it).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,347,011
- `queries` (i.e., topics); count=808,605
- `qrels` (relevance assessments); count=3,443,633
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_it', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_it', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_it', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/ko`
The `wikiclir/ko` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ko).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=394,177
- `queries` (i.e., topics); count=224,855
- `qrels` (relevance assessments); count=568,205
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_ko', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_ko', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_ko', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/nl`
The `wikiclir/nl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/nl).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,908,260
- `queries` (i.e., topics); count=687,718
- `qrels` (relevance assessments); count=2,334,644
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_nl', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_nl', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_nl', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/nn`
The `wikiclir/nn` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/nn).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=133,290
- `queries` (i.e., topics); count=99,493
- `qrels` (relevance assessments); count=250,141
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_nn', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_nn', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_nn', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/no`
The `wikiclir/no` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/no).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=471,420
- `queries` (i.e., topics); count=299,897
- `qrels` (relevance assessments); count=963,514
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_no', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_no', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_no', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/pl`
The `wikiclir/pl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pl).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,234,316
- `queries` (i.e., topics); count=693,656
- `qrels` (relevance assessments); count=2,471,360
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_pl', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_pl', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_pl', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/ro`
The `wikiclir/ro` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ro).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=376,655
- `queries` (i.e., topics); count=199,264
- `qrels` (relevance assessments); count=451,180
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_ro', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_ro', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_ro', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/sv`
The `wikiclir/sv` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/sv).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=3,785,412
- `queries` (i.e., topics); count=639,073
- `qrels` (relevance assessments); count=2,069,453
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_sv', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_sv', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_sv', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/sw`
The `wikiclir/sw` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/sw).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=37,079
- `queries` (i.e., topics); count=22,860
- `qrels` (relevance assessments); count=57,924
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_sw', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_sw', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_sw', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
# Dataset Card for `wikiclir/tl`
The `wikiclir/tl` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/tl).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=79,008
- `queries` (i.e., topics); count=48,930
- `qrels` (relevance assessments); count=72,359
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_tl', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_tl', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_tl', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
false |
# Dataset Card for `wikiclir/tr`
The `wikiclir/tr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/tr).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=295,593
- `queries` (i.e., topics); count=185,388
- `qrels`: (relevance assessments); count=380,651
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_tr', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_tr', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_tr', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
false |
# Dataset Card for `wikiclir/uk`
The `wikiclir/uk` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/uk).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=704,903
- `queries` (i.e., topics); count=348,222
- `qrels`: (relevance assessments); count=913,358
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_uk', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_uk', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_uk', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
false |
# Dataset Card for `wikiclir/vi`
The `wikiclir/vi` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/vi).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,392,152
- `queries` (i.e., topics); count=354,312
- `qrels`: (relevance assessments); count=611,355
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_vi', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_vi', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_vi', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
false |
# Dataset Card for `wikiclir/zh`
The `wikiclir/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/zh).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=951,480
- `queries` (i.e., topics); count=463,273
- `qrels`: (relevance assessments); count=926,130
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikiclir_zh', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ...}
queries = load_dataset('irds/wikiclir_zh', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/wikiclir_zh', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{sasaki-etal-2018-cross,
title = "Cross-Lingual Learning-to-Rank with Shared Representations",
author = "Sasaki, Shota and
Sun, Shuo and
Schamoni, Shigehiko and
Duh, Kevin and
Inui, Kentaro",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-2073",
doi = "10.18653/v1/N18-2073",
pages = "458--463"
}
```
|
false |
# Dataset Card for `wikir/en1k`
The `wikir/en1k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en1k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=369,721
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_en1k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `wikir/en59k`
The `wikir/en59k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en59k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,454,785
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_en59k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `wikir/en78k`
The `wikir/en78k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/en78k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,456,637
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_en78k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `wikir/ens78k`
The `wikir/ens78k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/ens78k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,456,637
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_ens78k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `wikir/es13k`
The `wikir/es13k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/es13k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=645,901
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_es13k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `wikir/fr14k`
The `wikir/fr14k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/fr14k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=736,616
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_fr14k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `wikir/it16k`
The `wikir/it16k` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikir#wikir/it16k).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=503,012
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/wikir_it16k', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Frej2020Wikir,
title={WIKIR: A Python toolkit for building a large-scale Wikipedia-based English Information Retrieval Dataset},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={LREC},
year={2020}
}
@inproceedings{Frej2020MlWikir,
title={MLWIKIR: A Python Toolkit for Building Large-scale Wikipedia-based Information Retrieval Datasets in Chinese, English, French, Italian, Japanese, Spanish and More},
author={Jibril Frej and Didier Schwab and Jean-Pierre Chevallet},
booktitle={CIRCLE},
year={2020}
}
```
|
false |
# Dataset Card for `trec-fair/2022/train`
The `trec-fair/2022/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-fair#trec-fair/2022/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=2,088,306
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-fair_2022_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ..., 'url': ...}
qrels = load_dataset('irds/trec-fair_2022_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `trec-cast/v0`
The `trec-cast/v0` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v0).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=47,696,605
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-cast_v0', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2019Cast,
title={CAsT 2019: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2019}
}
```
|
false |
# Dataset Card for `trec-cast/v1`
The `trec-cast/v1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=38,622,444
This dataset is used by: [`trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020), [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/trec-cast_v1', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2019Cast,
title={CAsT 2019: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2019}
}
```
|
false |
# Dataset Card for `trec-cast/v1/2020`
The `trec-cast/v1/2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020).
# Data
This dataset provides:
- `queries` (i.e., topics); count=216
- `qrels`: (relevance assessments); count=40,451
- For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1)
This dataset is used by: [`trec-cast_v1_2020_judged`](https://huggingface.co/datasets/irds/trec-cast_v1_2020_judged)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-cast_v1_2020', 'queries')
for record in queries:
record # {'query_id': ..., 'raw_utterance': ..., 'automatic_rewritten_utterance': ..., 'manual_rewritten_utterance': ..., 'manual_canonical_result_id': ..., 'topic_number': ..., 'turn_number': ...}
qrels = load_dataset('irds/trec-cast_v1_2020', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2020Cast,
title={CAsT 2020: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2020}
}
```
|
false |
# Dataset Card for `trec-cast/v1/2020/judged`
The `trec-cast/v1/2020/judged` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/trec-cast#trec-cast/v1/2020/judged).
# Data
This dataset provides:
- `queries` (i.e., topics); count=208
- For `docs`, use [`irds/trec-cast_v1`](https://huggingface.co/datasets/irds/trec-cast_v1)
- For `qrels`, use [`irds/trec-cast_v1_2020`](https://huggingface.co/datasets/irds/trec-cast_v1_2020)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/trec-cast_v1_2020_judged', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dalton2020Cast,
title={CAsT 2020: The Conversational Assistance Track Overview},
author={Jeffrey Dalton and Chenyan Xiong and Jamie Callan},
booktitle={TREC},
year={2020}
}
```
|
false |
# Dataset Card for `hc4/fa`
The `hc4/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/fa).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=486,486
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/hc4_fa', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Lawrie2022HC4,
author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
year = {2022},
month = apr,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
site = {Stavanger, Norway},
url = {https://arxiv.org/abs/2201.09992}
}
```
|
false |
# Dataset Card for `hc4/ru`
The `hc4/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/ru).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=4,721,064
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/hc4_ru', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Lawrie2022HC4,
author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
year = {2022},
month = apr,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
site = {Stavanger, Norway},
url = {https://arxiv.org/abs/2201.09992}
}
```
|
false |
# Dataset Card for `hc4/zh`
The `hc4/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/hc4#hc4/zh).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=646,305
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/hc4_zh', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Lawrie2022HC4,
author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
year = {2022},
month = apr,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
site = {Stavanger, Norway},
url = {https://arxiv.org/abs/2201.09992}
}
```
|
false |
# Dataset Card for `neuclir/1/fa`
The `neuclir/1/fa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/fa).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,232,016
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/neuclir_1_fa', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `neuclir/1/ru`
The `neuclir/1/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/ru).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=4,627,543
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/neuclir_1_ru', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `neuclir/1/zh`
The `neuclir/1/zh` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/neuclir#neuclir/1/zh).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=3,179,209
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/neuclir_1_zh', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'text': ..., 'url': ..., 'time': ..., 'cc_file': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset of short-phrase paraphrases (chitchat + poetry)
The dataset contains correct and incorrect paraphrases of short dialogue utterances ([dialogue system project](https://github.com/Koziev/chatbot))
and of poem fragments ([generative poetry project](https://github.com/Koziev/verslibre)).
The dataset is a list of sample tuples. Each sample consists of two lists:
```paraphrases``` - examples of correct paraphrases
```distractors``` - examples of incorrect paraphrases
The dataset is used to build the [sbert_synonymy paraphrase detector](https://huggingface.co/inkoziev/sbert_synonymy)
and the [generative poetic paraphraser](https://huggingface.co/inkoziev/paraphraser).
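A minimal sketch of how a sample's two lists can be expanded into labeled pairs for training a paraphrase detector (the field names follow the description above; the sample data is illustrative):

```python
from itertools import combinations, product

# Illustrative sample; a real sample is a tuple of two lists as described above.
sample = {
    "paraphrases": ["Помолчи", "Дружище, не говори ни слова!"],
    "distractors": ["Говори громче!"],
}

# Positive pairs: every combination of two correct paraphrases (label 1).
positives = [(a, b, 1) for a, b in combinations(sample["paraphrases"], 2)]
# Negative pairs: each correct paraphrase against each distractor (label 0).
negatives = [(a, b, 0) for a, b in product(sample["paraphrases"], sample["distractors"])]

pairs = positives + negatives
```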
## Disclaimer
The dataset deliberately allows non-conservative paraphrase semantics, within certain limits.
For example, the pair "_Помолчи_" ("Be quiet") and "_Дружище, не говори ни слова!_" ("Buddy, don't say a word!") is considered a correct paraphrase. Since the paraphraser
is used in the generative poetry project to build datasets, the dataset contains a number of metaphorical
and fairly loose paraphrases. These characteristics may make the dataset, and models
built on it, unsuitable for your projects.
## Other paraphrase datasets
When training models, you can combine this dataset with data from other paraphrase datasets, for example [tapaco](https://huggingface.co/datasets/tapaco).
|
true | |
false |
Embrapa Wine Grape Instance Segmentation Dataset – Embrapa WGISD
================================================================
[](https://zenodo.org/badge/latestdoi/199083745)
This is a detailed description of the dataset, a
*datasheet for the dataset* as proposed by [Gebru *et al.*](https://arxiv.org/abs/1803.09010)
Motivation for Dataset Creation
-------------------------------
### Why was the dataset created?
Embrapa WGISD (*Wine Grape Instance Segmentation Dataset*) was created
to provide images and annotations for studying *object detection and instance
segmentation* for image-based monitoring and field robotics in
viticulture. It provides instances from five different grape varieties
taken in the field. These instances show variation in grape pose,
illumination and focus, as well as genetic and phenological variations
such as shape, color and compactness.
### What (other) tasks could the dataset be used for?
Possible uses include relaxations of the instance segmentation problem:
classification (Is a grape in the image?), semantic segmentation (What
are the "grape pixels" in the image?), object detection (Where are
the grapes in the image?), and counting (How many berries are there
per cluster?). The WGISD can also be used in grape variety
identification.
### Who funded the creation of the dataset?
The building of the WGISD dataset was supported by the Embrapa SEG
Project 01.14.09.001.05.04, *Image-based metrology for Precision
Agriculture and Phenotyping*, and the CNPq PIBIC Program (grants
161165/2017-6 and 125044/2018-6).
Dataset Composition
-------------------
### What are the instances?
Each instance consists of an RGB image and an annotation describing the
locations of grape clusters as bounding boxes. A subset of the instances also
contains binary masks identifying the pixels belonging to each grape
cluster. Each image presents at least one grape cluster. Some grape
clusters can appear far in the background and should be ignored.
### Are relationships between instances made explicit in the data?
File name prefixes identify the variety observed in the instance.
| Prefix | Variety |
| --- | --- |
| CDY | *Chardonnay* |
| CFR | *Cabernet Franc* |
| CSV | *Cabernet Sauvignon*|
| SVB | *Sauvignon Blanc* |
| SYH | *Syrah* |
### How many instances of each type are there?
The dataset consists of 300 images containing 4,432 grape clusters
identified by bounding boxes. A subset of 137 images also contains
binary masks identifying the pixels of each cluster. This means that of
the 4,432 clusters, 2,020 also present binary masks for instance
segmentation, as summarized in the following table.
|Prefix | Variety | Date | Images | Boxed clusters | Masked clusters|
| --- | --- | --- | --- | --- | --- |
|CDY | *Chardonnay* | 2018-04-27 | 65 | 840 | 308|
|CFR | *Cabernet Franc* | 2018-04-27 | 65 | 1,069 | 513|
|CSV | *Cabernet Sauvignon* | 2018-04-27 | 57 | 643 | 306|
|SVB | *Sauvignon Blanc* | 2018-04-27 | 65 | 1,316 | 608|
|SYH | *Syrah* | 2017-04-27 | 48 | 563 | 285|
|Total | | | 300 | 4,431 | 2,020|
*General information about the dataset: the grape varieties and their identifying prefixes, the date of image capture in the field, the number of images (instances) and the identified grape clusters.*
#### Contributions
Another subset of 111 images with separated and non-occluded grape
clusters was annotated with point annotations for every berry by F. Khoroshevsky and S. Khoroshevsky ([Khoroshevsky *et al.*, 2021](https://doi.org/10.1007/978-3-030-65414-6_19)). These annotations are available in `test_berries.txt`, `train_berries.txt` and `val_berries.txt`.
|Prefix | Variety | Berries |
| --- | --- | --- |
|CDY | *Chardonnay* | 1,102 |
|CFR | *Cabernet Franc* | 1,592 |
|CSV | *Cabernet Sauvignon* | 1,712 |
|SVB | *Sauvignon Blanc* | 1,974 |
|SYH | *Syrah* | 969 |
|Total | | 7,349 |
*Berries annotations by F. Khoroshevsky and S. Khoroshevsky.*
Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66))
provided point-based annotations for berries in all 300 images, totaling 187,374 berries.
These annotations are available in `contrib/berries`.
Daniel Angelov (@23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory.
### What data does each instance consist of?
Each instance contains an 8-bit RGB image and a text file containing one
bounding box description per line. These text files follow the "YOLO
format":

```
CLASS CX CY W H
```
*class* is an integer defining the object class – the dataset presents
only the grape class that is numbered 0, so every line starts with this
“class zero” indicator. The center of the bounding box is the point
*(c_x, c_y)*, represented as float values because this format normalizes
the coordinates by the image dimensions. To get the absolute position,
use *(2048 c_x, 1365 c_y)*. The bounding box dimensions are
given by *W* and *H*, also normalized by the image size.
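As a sketch (assuming a REBEL image of 2048 × 1365 pixels; the helper name is ours), the conversion from a normalized annotation line back to absolute pixel coordinates looks like:

```python
def yolo_to_absolute(line, img_w=2048, img_h=1365):
    """Convert one 'CLASS CX CY W H' line to (class, x0, y0, w, h) in pixels."""
    cls, cx, cy, w, h = line.split()
    w, h = float(w) * img_w, float(h) * img_h
    # The (cx, cy) center is normalized; scale it, then shift to the top-left corner
    x0 = float(cx) * img_w - w / 2
    y0 = float(cy) * img_h - h / 2
    return int(cls), round(x0), round(y0), round(w), round(h)
```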
The instances presenting mask data for instance segmentation include
files with the `.npz` extension. These files are compressed
archives of NumPy *n*-dimensional arrays. Each array is a
*H × W × n_clusters* three-dimensional array where
*n_clusters* is the number of grape clusters observed in the
image. After assigning the NumPy array to a variable `M`, the mask for
the *i*-th grape cluster can be found in `M[:,:,i]`. The *i*-th mask
corresponds to the *i*-th line in the bounding boxes file.
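A minimal loading sketch (the file name is hypothetical; the array key inside an archive can be inspected via `archive.files`):

```python
import numpy as np

def load_masks(npz_path):
    """Return the H x W x n_clusters mask array stored in a WGISD-style .npz archive."""
    archive = np.load(npz_path)
    # Each archive holds a single array; grab it whatever its key is
    return archive[archive.files[0]]

# M = load_masks("CDY_2043.npz")  # hypothetical instance
# mask_0 = M[:, :, 0]             # mask matching line 0 of the bounding-box file
```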
The dataset also includes the original image files, presenting the full
original resolution. The normalized annotation for bounding boxes allows
easy identification of clusters in the original images, but the mask
data will need to be properly rescaled if users wish to work on the
original full resolution.
#### Contributions
*For `test_berries.txt` , `train_berries.txt` and `val_berries.txt`*:
The berry annotations follow a similar notation, the only exception
being that each text file (train/val/test) also includes the
instance file name:

```
FILENAME CLASS CX CY
```
where *filename* stands for the instance file name, *class* is an integer
defining the object class (0 for all instances), and the point *(c_x, c_y)*
gives the absolute position of each "dot" marking a single berry in
a well-defined cluster.
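A record from these files can be parsed with a few lines of Python (a sketch; the function name is ours):

```python
def parse_berry_line(line):
    """Parse a 'FILENAME CLASS CX CY' record into (filename, class_id, (x, y))."""
    fname, cls, cx, cy = line.split()
    # Coordinates are absolute pixel positions, unlike the normalized YOLO boxes
    return fname, int(cls), (float(cx), float(cy))
```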
*For `contrib/berries`*:
The annotations provide the *(x, y)* point position for each berry center, in a tabular form:

```
X Y
```

These point-based annotations can be easily loaded using, for example, `numpy.loadtxt`. See `WGISD.ipynb` for examples.
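A minimal sketch of such loading (the wrapper and file name are ours):

```python
import io
import numpy as np

def load_berry_centers(source):
    """Load 'X Y' berry centers into an (n_berries, 2) float array."""
    # ndmin=2 keeps the shape two-dimensional even for a single-berry file
    return np.loadtxt(source, ndmin=2)

# points = load_berry_centers("contrib/berries/CDY_2015.txt")  # hypothetical file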
[Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version of the annotations in the JSON-based [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory.
### Is everything included or does the data rely on external resources?
Everything is included in the dataset.
### Are there recommended data splits or evaluation measures?
The dataset comes with specified train/test splits. The splits are found
in lists stored as text files. There are also lists referring only to
instances presenting binary masks.
| | Images | Boxed clusters | Masked clusters |
| ---------------------| -------- | ---------------- | ----------------- |
| Training/Validation | 242 | 3,581 | 1,612 |
| Test | 58 | 850 | 408 |
| Total | 300 | 4,431 | 2,020 |
*Dataset recommended split.*
Standard measures from the information retrieval and computer vision
literature should be employed: precision and recall, *F1-score* and
average precision as seen in [COCO](http://cocodataset.org)
and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC).
### What experiments were initially run on this dataset?
The first experiments run on this dataset are described in [*Grape detection, segmentation and tracking using deep neural networks and three-dimensional association*](https://arxiv.org/abs/1907.11819) by Santos *et al.*. See also the following video demo:
[](http://www.youtube.com/watch?v=1Hji3GS4mm4 "Grape detection, segmentation and tracking")
**UPDATE**: The JPG files corresponding to the video frames in the [video demo](http://www.youtube.com/watch?v=1Hji3GS4mm4) are now available in the `extras` directory.
Data Collection Process
-----------------------
### How was the data collected?
Images were captured at the vineyards of Guaspari Winery, located at
Espírito Santo do Pinhal, São Paulo, Brazil (Lat -22.181018, Lon
-46.741618). The winery staff performs dual pruning: one for shaping
(after previous year harvest) and one for production, resulting in
canopies of lower density. Image capture took place in April
2017 for *Syrah* and in April 2018 for the other varieties.
A Canon EOS REBEL T3i DSLR camera and a Motorola Z2 Play smartphone were
used to capture the images. The cameras were positioned between the vine
lines, facing the vines, at distances of around 1-2 meters. The EOS REBEL
T3i camera captured 240 images, including all *Syrah* pictures. The Z2
smartphone captured 60 images covering all varieties except *Syrah*. The
REBEL images were scaled to *2048 × 1365* pixels and the Z2 images
to *2048 × 1536* pixels. More data about the capture process can be found
in the Exif data found in the original image files, included in the dataset.
### Who was involved in the data collection process?
T. T. Santos, A. A. Santos and S. Avila captured the images in the
field. T. T. Santos, L. L. de Souza and S. Avila performed the
annotation for bounding boxes and masks.
### How was the data associated with each instance acquired?
The rectangular bounding boxes identifying the grape clusters were
annotated using the [`labelImg` tool](https://github.com/tzutalin/labelImg).
The clusters can be under
severe occlusion by leaves, trunks or other clusters. Considering the
absence of 3-D data and on-site annotation, the cluster locations had
to be defined using only a single-view image, so some clusters could be
incorrectly delimited.
A subset of the bounding boxes was selected for mask annotation, using a
novel tool developed by the authors and presented in this work. This
interactive tool lets the annotator mark grape and background pixels
using scribbles, and a graph matching algorithm developed by [Noma *et al.*](https://doi.org/10.1016/j.patcog.2011.08.017)
is employed to perform image segmentation to every pixel in the bounding
box, producing a binary mask representing grape/background
classification.
#### Contributions
A subset of the bounding boxes covering well-defined (separated and
non-occluded) clusters was used for "dot" (berry) annotations of each
grape, to serve counting applications as described in [Khoroshevsky *et
al.*](https://doi.org/10.1007/978-3-030-65414-6_19). The berries
annotation was performed by F. Khoroshevsky and S. Khoroshevsky.
Geng Deng ([Deng *et al.*, 2020](https://doi.org/10.1007/978-3-030-63820-7_66))
provided point-based annotations for berries in all 300 images, totaling
187,374 berries. These annotations are available in `contrib/berries`.
Deng *et al.* employed [Huawei ModelArt](https://www.huaweicloud.com/en-us/product/modelarts.html)
for their annotation effort.
Data Preprocessing
------------------
### What preprocessing/cleaning was done?
The following steps were taken to process the data:
1. Bounding boxes were annotated for each image using the `labelImg`
tool.
2. Images were resized to *W = 2048* pixels. This resolution proved to
be practical to mask annotation, a convenient balance between grape
detail and time spent by the graph-based segmentation algorithm.
3. A randomly selected subset of images were employed on mask
annotation using the interactive tool based on graph matching.
4. All binary masks were inspected for pixels attributed to
more than one grape cluster. The annotator assigned the disputed
pixels to the most likely cluster.
5. The bounding boxes were fitted to the masks, which provided a fine
tuning of grape clusters locations.
### Was the “raw” data saved in addition to the preprocessed data?
The original resolution images, containing the Exif data provided by the
cameras, are available in the dataset.
Dataset Distribution
--------------------
### How is the dataset distributed?
The dataset is [available at GitHub](https://github.com/thsant/wgisd).
### When will the dataset be released/first distributed?
The dataset was released in July 2019.
### What license (if any) is it distributed under?
The data is released under [**Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license)**](https://creativecommons.org/licenses/by-nc/4.0/).
There is a request to cite the corresponding paper if the dataset is used. For
commercial use, contact Embrapa Agricultural Informatics business office.
### Are there any fees or access/export restrictions?
There are no fees or restrictions. For commercial use, contact Embrapa
Agricultural Informatics business office.
Dataset Maintenance
-------------------
### Who is supporting/hosting/maintaining the dataset?
The dataset is hosted at Embrapa Agricultural Informatics and all
comments or requests can be sent to [Thiago T. Santos](https://github.com/thsant)
(maintainer).
### Will the dataset be updated?
There are no scheduled updates.
* In May, 2022, [Daniel Angelov (@23pointsNorth)](https://github.com/23pointsNorth) provided a version of the annotations in [COCO format](https://cocodataset.org/#format-data). See the `coco_annotations` directory.
* In February, 2021, F. Khoroshevsky and S. Khoroshevsky provided the first extension: the berries ("dot")
annotations.
* In April, 2021, Geng Deng provided point annotations for berries. T. Santos converted Deng's XML files to
easier-to-load text files now available in `contrib/berries` directory.
In case of further updates, releases will be properly tagged at GitHub.
### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?
Contributors should contact the maintainer by e-mail.
### No warranty
The maintainers and their institutions are *exempt from any liability,
judicial or extrajudicial, for any losses or damages arising from the
use of the data contained in the image database*.
|
false | 11.5k Russian books in txt format, divided by genre
11.5 thousand books of Russian literature. The dataset was made from the ancient "lib in poc" disc. |
false | |
true |
# Dataset Card for ScienceIE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://scienceie.github.io/index.html](https://scienceie.github.io/index.html)
- **Repository:** [https://github.com/ScienceIE/scienceie.github.io](https://github.com/ScienceIE/scienceie.github.io)
- **Paper:** [SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications](https://arxiv.org/abs/1704.02853)
- **Leaderboard:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)
- **Size of downloaded dataset files:** 13.7 MB
- **Size of generated dataset files:** 17.4 MB
### Dataset Summary
ScienceIE is a dataset for the SemEval task of extracting key phrases and relations between them from scientific documents.
A corpus for the task was built from ScienceDirect open access publications and was available freely for participants, without the need to sign a copyright agreement. Each data instance consists of one paragraph of text, drawn from a scientific paper.
Publications were provided in plain text, in addition to XML format, which included the full text of the publication as well as additional metadata. 500 paragraphs from journal articles, evenly distributed among the domains Computer Science, Material Sciences and Physics, were selected.
The corpus consists of 350 documents for training, 50 for development and 100 for testing. This is similar to the pilot task described in Section 5 of the paper, for which 144 articles were used for training, 40 for development and 100 for testing.
There are three subtasks:
- Subtask (A): Identification of keyphrases
- Given a scientific publication, the goal of this task is to identify all the keyphrases in the document.
- Subtask (B): Classification of identified keyphrases
- In this task, each keyphrase needs to be labelled by one of three types: (i) PROCESS, (ii) TASK, and (iii) MATERIAL.
- PROCESS: Keyphrases relating to some scientific model, algorithm or process should be labelled by PROCESS.
  - TASK: Keyphrases that denote the application, end goal, problem, or task should be labelled TASK.
- MATERIAL: MATERIAL keyphrases identify the resources used in the paper.
- Subtask (C): Extraction of relationships between two identified keyphrases
  - Every pair of keyphrases needs to be labelled with one of three types: (i) HYPONYM-OF, (ii) SYNONYM-OF, and (iii) NONE.
- HYPONYM-OF: The relationship between two keyphrases A and B is HYPONYM-OF if semantic field of A is included within that of B. One example is Red HYPONYM-OF Color.
- SYNONYM-OF: The relationship between two keyphrases A and B is SYNONYM-OF if they both denote the same semantic field, for example Machine Learning SYNONYM-OF ML.
Note: In this repository the documents were split into sentences using spaCy, resulting in a 2388, 400, 838 split. The `id` consists of the document id and the example index within the document separated by an underscore, e.g. `S0375960115004120_1`. This should enable you to reconstruct the documents from the sentences.
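Since the `id` follows the `<document id>_<example index>` scheme, the documents can be reassembled from the sentence-level examples with a few lines (a sketch; the helper name is ours):

```python
from collections import defaultdict

def group_by_document(examples):
    """Group sentence-level examples back into documents, ordered by example index."""
    docs = defaultdict(list)
    for ex in examples:
        doc_id, idx = ex["id"].rsplit("_", 1)
        docs[doc_id].append((int(idx), ex["tokens"]))
    # Sort sentences within each document by their index
    return {doc: [toks for _, toks in sorted(sents)] for doc, sents in docs.items()}
```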
### Supported Tasks and Leaderboards
- **Tasks:** Key phrase extraction and relation extraction in scientific documents
- **Leaderboards:** [https://competitions.codalab.org/competitions/15898](https://competitions.codalab.org/competitions/15898)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### subtask_a
- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 17.4 MB
An example of 'train' looks as follows:
```json
{
"id": "S0375960115004120_1",
"tokens": ["Another", "remarkable", "feature", "of", "the", "quantum", "field", "treatment", "can", "be", "revealed", "from", "the", "investigation", "of", "the", "vacuum", "state", "."],
"tags": [0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0]
}
```
#### subtask_b
- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 17.4 MB
An example of 'train' looks as follows:
```json
{
"id": "S0375960115004120_2",
"tokens": ["For", "a", "classical", "field", ",", "vacuum", "is", "realized", "by", "simply", "setting", "the", "potential", "to", "zero", "resulting", "in", "an", "unaltered", ",", "free", "evolution", "of", "the", "particle", "'s", "plane", "wave", "(", "|ψI〉=|ψIII〉=|k0", "〉", ")", "."],
"tags": [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0]
}
```
#### subtask_c
- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 30.1 MB
An example of 'train' looks as follows:
```json
{
"id": "S0375960115004120_3",
"tokens": ["In", "the", "quantized", "treatment", ",", "vacuum", "is", "represented", "by", "an", "initial", "Fock", "state", "|n0=0", "〉", "which", "still", "interacts", "with", "the", "particle", "and", "yields", "as", "final", "state", "|ΨIII", "〉", "behind", "the", "field", "region(19)|ΨI〉=|k0〉⊗|0〉⇒|ΨIII〉=∑n=0∞t0n|k−n〉⊗|n", "〉", "with", "a", "photon", "exchange", "probability(20)P0,n=|t0n|2=1n!e−Λ2Λ2n", "The", "particle", "thus", "transfers", "energy", "to", "the", "vacuum", "field", "leading", "to", "a", "Poissonian", "distributed", "final", "photon", "number", "."],
"tags": [[0, 0, ...], [0, 0, ...], ...]
}
```
Note: The tag sequence contains one vector per token; for the first token of
each key phrase, the vector encodes that token's relationship to every other token in the sequence.
#### ner
- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 17.4 MB
An example of 'train' looks as follows:
```json
{
"id": "S0375960115004120_4",
"tokens": ["Let", "'s", "consider", ",", "for", "example", ",", "a", "superconducting", "resonant", "circuit", "as", "source", "of", "the", "field", "."],
"tags": [0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 0, 0, 0, 0, 0, 0]
}
```
#### re
- **Size of downloaded dataset files:** 13.7 MB
- **Size of the generated dataset:** 16.4 MB
An example of 'train' looks as follows:
```json
{
"id": "S0375960115004120_5",
"tokens": ["In", "the", "quantized", "treatment", ",", "vacuum", "is", "represented", "by", "an", "initial", "Fock", "state", "|n0=0", "〉", "which", "still", "interacts", "with", "the", "particle", "and", "yields", "as", "final", "state", "|ΨIII", "〉", "behind", "the", "field", "region(19)|ΨI〉=|k0〉⊗|0〉⇒|ΨIII〉=∑n=0∞t0n|k−n〉⊗|n", "〉", "with", "a", "photon", "exchange", "probability(20)P0,n=|t0n|2=1n!e−Λ2Λ2n", "The", "particle", "thus", "transfers", "energy", "to", "the", "vacuum", "field", "leading", "to", "a", "Poissonian", "distributed", "final", "photon", "number", "."],
"arg1_start": 2,
"arg1_end": 4,
"arg1_type": "Task",
"arg2_start": 5,
"arg2_end": 6,
"arg2_type": "Material",
"relation": 0
}
```
### Data Fields
#### subtask_a
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: the list of tags of this sentence marking a token as being outside, at the beginning, or inside a key phrase, a `list` of classification labels.
```python
{"O": 0, "B": 1, "I": 2}
```
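The tag sequence can be decoded into key-phrase token spans like so (a sketch using the subtask_a train example above; the function name is ours):

```python
def bio_spans(tags, B=1, I=2):
    """Extract (start, end) token spans (end exclusive) from an O/B/I tag sequence."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == B:                            # a new key phrase begins
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag != I and start is not None:    # O closes any open phrase
            spans.append((start, i))
            start = None
    if start is not None:                       # phrase running to the end
        spans.append((start, len(tags)))
    return spans
```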
#### subtask_b
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: the list of tags of this sentence marking a token as being outside a key phrase, or being part of a material, process or task, a `list` of classification labels.
```python
{"O": 0, "M": 1, "P": 2, "T": 3}
```
#### subtask_c
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: one vector per token that, for the first token of each key phrase, encodes that token's relationship to every other token in the sequence, a `list` of a `list` of classification labels.
```python
{"O": 0, "S": 1, "H": 2}
```
#### ner
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `tags`: the list of ner tags of this sentence, a `list` of classification labels.
```python
{"O": 0, "B-Material": 1, "I-Material": 2, "B-Process": 3, "I-Process": 4, "B-Task": 5, "I-Task": 6}
```
#### re
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `arg1_start`: the 0-based index of the start token of the relation arg1 mention, an `int` feature.
- `arg1_end`: the 0-based index of the end token of the relation arg1 mention, exclusive, an `int` feature.
- `arg1_type`: the key phrase type of the relation arg1 mention, a `string` feature.
- `arg2_start`: the 0-based index of the start token of the relation arg2 mention, an `int` feature.
- `arg2_end`: the 0-based index of the end token of the relation arg2 mention, exclusive, an `int` feature.
- `arg2_type`: the key phrase type of the relation arg2 mention, a `string` feature.
- `relation`: the relation label of this instance, a classification label.
```python
{"O": 0, "Synonym-of": 1, "Hyponym-of": 2}
```
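Putting the index fields together, the two argument mentions can be recovered from an `re` instance as follows (end indices exclusive; the helper name is ours):

```python
def relation_mentions(example):
    """Return (arg1_text, arg2_text, relation_label_id) for an 're' instance."""
    toks = example["tokens"]
    # Slices use the exclusive end indices described in the field list above
    arg1 = " ".join(toks[example["arg1_start"]:example["arg1_end"]])
    arg2 = " ".join(toks[example["arg2_start"]:example["arg2_end"]])
    return arg1, arg2, example["relation"]
```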
### Data Splits
| | Train | Dev | Test |
|-----------|-------|------|------|
| subtask_a | 2388 | 400 | 838 |
| subtask_b | 2388 | 400 | 838 |
| subtask_c | 2388 | 400 | 838 |
| ner | 2388 | 400 | 838 |
| re | 24558 | 4838 | 6618 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/AugensteinDRVM17,
author = {Isabelle Augenstein and
Mrinal Das and
Sebastian Riedel and
Lakshmi Vikraman and
Andrew McCallum},
title = {SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations
from Scientific Publications},
journal = {CoRR},
volume = {abs/1704.02853},
year = {2017},
url = {http://arxiv.org/abs/1704.02853},
eprinttype = {arXiv},
eprint = {1704.02853},
timestamp = {Mon, 13 Aug 2018 16:46:36 +0200},
biburl = {https://dblp.org/rec/journals/corr/AugensteinDRVM17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
false | # Dataset Card for "hearthstone-cards-512"
# Not affiliated in any way with Blizzard or Hearthstone
# Please note that this entire dataset contains copyrighted material |
true | # Green patents dataset
- num_rows: 9145
- features: [title, label]
- label: 0, 1
The dataset contains patent titles labeled as 1 (= "green") or 0 (= "not green").
"Green" patent titles were gathered by searching for CPC class "Y02" with Google Patents (query: "status:APPLICATION type:PATENT (Y02) country:EP,US", 05/01/2023).
"Not green" patent titles are derived from the [HUPD dataset](https://huggingface.co/datasets/HUPD/hupd) (a random choice of 5,000 titles). We could not find any patents in HUPD assigned to any CPC class starting with "Y". |
false | # Dataset Card for HunSum-1
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
### Dataset Summary
The HunSum-1 Dataset is a Hungarian-language dataset containing over 1.1M unique news articles with lead and other metadata. The dataset contains articles from 9 major Hungarian news websites.
### Supported Tasks and Leaderboards
- 'summarization'
- 'title generation'
## Dataset Structure
### Data Fields
- `uuid`: a string containing the unique id
- `article`: a string containing the body of the news article
- `lead`: a string containing the lead of the article
- `title`: a string containing the title of the article
- `url`: a string containing the URL for the article
- `domain`: a string containing the domain of the url
- `date_of_creation`: a timestamp containing the date when the article was created
- `tags`: a sequence containing the tags of the article
### Data Splits
The HunSum-1 dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 1,144,255 |
| Validation    | 1,996                                       |
| Test          | 1,996                                       |
## Citation
If you use our dataset, please cite the following paper:
```
@inproceedings {HunSum-1,
title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
pages = {231--243}
}
``` |
true | # AutoTrain Dataset for project: real-vs-fake-news
## Dataset Description
This dataset has been automatically processed by AutoTrain for project real-vs-fake-news.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_title": "FBI Russia probe helped by Australian diplomat tip-off: NYT",
"text": "WASHINGTON (Reuters) - Trump campaign adviser George Papadopoulos told an Australian diplomat in May 2016 that Russia had political dirt on Democratic presidential candidate Hillary Clinton, the New York Times reported on Saturday. The conversation between Papadopoulos and the diplomat, Alexander Downer, in London was a driving factor behind the FBI\u2019s decision to open a counter-intelligence investigation of Moscow\u2019s contacts with the Trump campaign, the Times reported. Two months after the meeting, Australian officials passed the information that came from Papadopoulos to their American counterparts when leaked Democratic emails began appearing online, according to the newspaper, which cited four current and former U.S. and foreign officials. Besides the information from the Australians, the probe by the Federal Bureau of Investigation was also propelled by intelligence from other friendly governments, including the British and Dutch, the Times said. Papadopoulos, a Chicago-based international energy lawyer, pleaded guilty on Oct. 30 to lying to FBI agents about contacts with people who claimed to have ties to top Russian officials. It was the first criminal charge alleging links between the Trump campaign and Russia. The White House has played down the former aide\u2019s campaign role, saying it was \u201cextremely limited\u201d and that any actions he took would have been on his own. The New York Times, however, reported that Papadopoulos helped set up a meeting between then-candidate Donald Trump and Egyptian President Abdel Fattah al-Sisi and edited the outline of Trump\u2019s first major foreign policy speech in April 2016. The federal investigation, which is now being led by Special Counsel Robert Mueller, has hung over Trump\u2019s White House since he took office almost a year ago. Some Trump allies have recently accused Mueller\u2019s team of being biased against the Republican president. 
Lawyers for Papadopoulos did not immediately respond to requests by Reuters for comment. Mueller\u2019s office declined to comment. Trump\u2019s White House attorney, Ty Cobb, declined to comment on the New York Times report. \u201cOut of respect for the special counsel and his process, we are not commenting on matters such as this,\u201d he said in a statement. Mueller has charged four Trump associates, including Papadopoulos, in his investigation. Russia has denied interfering in the U.S. election and Trump has said there was no collusion between his campaign and Moscow. ",
"feat_subject": "politicsNews",
"feat_date": "December 30, 2017 ",
"target": 1
},
{
"feat_title": "Democrats ride grassroots wave to major statehouse gains",
"text": "(Reuters) - Democrats claimed historic gains in Virginia\u2019s statehouse and booted Republicans from state and local office across the United States on Tuesday, in the party\u2019s first big wave of victories since Republican Donald Trump\u2019s won the White House a year ago. Democrats must figure out how to turn that momentum to their advantage in November 2018 elections, when control of the U.S. Congress and scores of statehouses will be at stake. From coast to coast, Democratic victories showed grassroots resistance to Trump rallying the party\u2019s base, while independent and conservative voters appeared frustrated with the unpopular Republican leadership in Washington. Democrats won this year\u2019s races for governor in Virginia and New Jersey, but successes in legislative and local races nationwide may have revealed more about where the party stands a year into Trump\u2019s administration. Unexpectedly massive Democratic gains in Virginia\u2019s statehouse surprised even the most optimistic party loyalists in a state that has trended Democratic in recent years but remains a top target for both parties in national elections. \u201cThis is beyond our wildest expectations, to be honest,\u201d said Catherine Vaughan, co-founder of Flippable, one of several new startup progressive groups rebuilding the party at the grassroots level. With several races still too close to call, Democrats were close to flipping, or splitting, control of the Virginia House of Delegates, erasing overnight a two-to-one Republican majority. Democratic Lieutenant Governor Ralph Northam also defeated Republican Ed Gillespie by nearly nine percentage points in what had seemed a closer contest for Virginia\u2019s governor\u2019s mansion, a year after Democrat Hillary Clinton carried the state by five points in the presidential election. 
The losing candidate had employed Trump-style campaign tactics that highlighted divisive issues such as immigration, although the president did not join him on the campaign trail. In New Jersey, a Democratic presidential stronghold, voters replaced a two-term Republican governor with a Democrat and increased the party\u2019s majorities in the state legislature. Democrats notched additional wins in a Washington state Senate race that gave the party full control of the state government and in Republican-controlled Georgia, where Democrats picked up three seats in special state legislative elections. \u201cThis was the first chance that the voters got to send a message to Donald Trump and they took advantage of it,\u201d John Feehery, a Republican strategist in Washington, said by phone. The gains suggested to some election analysts that Democrats could retake the U.S. House of Representatives next year. Republicans control both the House and Senate along with the White House. Dave Wasserman, who analyzes U.S. House and statehouse races for the nonpartisan Cook Political Report, called the Virginia results a \u201ctidal wave.\u201d Even after Tuesday\u2019s gains, however, Democrats are completely locked out of power in 26 state governments. Republicans control two-thirds of U.S. legislative chambers. Desperate to rebuild, national Democrats this year showed newfound interest in legislative contests and races even farther down the ballot. The Democratic National Committee successfully invested in mayoral races from St. Petersburg, Florida, to Manchester, New Hampshire. \u201cIf there is a lesson to be taken from yesterday, it is that we need to make sure that we are competing everywhere, because Democrats can win,\u201d DNC Chairman Tom Perez said on a media call. Democratic Legislative Campaign Committee executive director Jessica Post said national party leaders must remain focused on local races, even in a congressional year. 
\u201cWe don\u2019t focus enough on the state level, and that is why we are in the place we are,\u201d she said. \u201cBut when we do, we win.\u201d ",
"feat_subject": "politicsNews",
"feat_date": "November 8, 2017 ",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_title": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"feat_subject": "Value(dtype='string', id=None)",
"feat_date": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['Fake', 'True'], id=None)"
}
```
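As a minimal sketch (plain Python, independent of the 🤗 `datasets` API; the helper names are ours), the `target` ClassLabel above maps class ids to names like this:

```python
# Sketch of the `target` ClassLabel mapping from the schema above.
# The index position doubles as the integer class id.
names = ["Fake", "True"]

id2label = dict(enumerate(names))               # {0: 'Fake', 1: 'True'}
label2id = {n: i for i, n in enumerate(names)}  # {'Fake': 0, 'True': 1}

# The example instance above has "target": 1, i.e. the 'True' class.
print(id2label[1])
```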
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 1598 |
| valid | 400 |
|
true | # Dataset Card for "gids"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** [RE-DS-Word-Attention-Models](https://github.com/SharmisthaJat/RE-DS-Word-Attention-Models/tree/master/Data/GIDS)
- **Paper:** [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
### Dataset Summary
Google-IISc Distant Supervision (GIDS) is a new dataset for distantly-supervised relation extraction.
GIDS is seeded from the human-judged Google relation extraction corpus.
See the paper for full details: [Improving Distantly Supervised Relation Extraction using Word and Entity Based Attention](https://arxiv.org/abs/1804.06987)
Note:
- There is a formatted version that you can load with `datasets.load_dataset('gids', name='gids_formatted')`. This version is tokenized with spaCy, removes the underscores in the entities and provides entity offsets.
### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification
- **Leaderboards:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
#### gids
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 8.5 MB
An example of 'train' looks as follows:
```json
{
"sentence": "War as appropriate. Private Alfred James_Smurthwaite Sample. 26614. 2nd Battalion Yorkshire Regiment. Son of Edward James Sample, of North_Ormesby , Yorks. Died 2 April 1917. Aged 29. Born Ormesby, Enlisted Middlesbrough. Buried BUCQUOY ROAD CEMETERY, FICHEUX. Not listed on the Middlesbrough War Memorial Private Frederick Scott. 46449. 4th Battalion Yorkshire Regiment. Son of William and Maria Scott, of 25, Aspinall St., Heywood, Lancs. Born at West Hartlepool. Died 27 May 1918. Aged 24.",
"subj_id": "/m/02qt0sv",
"obj_id": "/m/0fnhl9",
"subj_text": "James_Smurthwaite",
"obj_text": "North_Ormesby",
"relation": 4
}
```
#### gids_formatted
- **Size of downloaded dataset files:** 8.94 MB
- **Size of the generated dataset:** 11.82 MB
An example of 'train' looks as follows:
```json
{
"token": ["announced", "he", "had", "closed", "shop", ".", "Mary", "D.", "Crisp", "Coyle", "opened", "in", "1951", ".", "Stoffey", ",", "a", "Maricopa", "County", "/", "Phoenix", "city", "resident", "and", "longtime", "customer", ",", "bought", "the", "business", "in", "2011", ",", "when", "then", "owners", "were", "facing", "closure", ".", "He", "renovated", "the", "diner", "is", "interior", ",", "increased", "training", "for", "staff", "and", "expanded", "the", "menu", "."],
"subj_start": 6,
"subj_end": 9,
"obj_start": 17,
"obj_end": 22,
"relation": 4
}
```
### Data Fields
The data fields are the same among all splits.
#### gids
- `sentence`: the sentence, a `string` feature.
- `subj_id`: the id of the relation subject mention, a `string` feature.
- `obj_id`: the id of the relation object mention, a `string` feature.
- `subj_text`: the text of the relation subject mention, a `string` feature.
- `obj_text`: the text of the relation object mention, a `string` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
#### gids_formatted
- `token`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `subj_start`: the 0-based index of the start token of the relation subject mention, an `int` feature.
- `subj_end`: the 0-based index of the end token of the relation subject mention, exclusive, an `int` feature.
- `obj_start`: the 0-based index of the start token of the relation object mention, an `int` feature.
- `obj_end`: the 0-based index of the end token of the relation object mention, exclusive, an `int` feature.
- `relation`: the relation label of this instance, an `int` classification label.
```python
{"NA": 0, "/people/person/education./education/education/institution": 1, "/people/person/education./education/education/degree": 2, "/people/person/place_of_birth": 3, "/people/deceased_person/place_of_death": 4}
```
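As a sketch of how these fields fit together (reusing the `gids_formatted` train example above; the helper names are ours, not part of the dataset): the exclusive end offsets mean mentions can be recovered with plain list slicing, and the relation id can be decoded with the inverted label map.

```python
# Slicing out a mention and decoding the relation id, based on the
# `gids_formatted` train example shown above (helper names are ours).
label2id = {
    "NA": 0,
    "/people/person/education./education/education/institution": 1,
    "/people/person/education./education/education/degree": 2,
    "/people/person/place_of_birth": 3,
    "/people/deceased_person/place_of_death": 4,
}
id2label = {v: k for k, v in label2id.items()}

example = {
    "token": ["announced", "he", "had", "closed", "shop", ".",
              "Mary", "D.", "Crisp", "Coyle", "opened", "in", "1951", "."],
    "subj_start": 6, "subj_end": 9,   # end offsets are exclusive
    "relation": 4,
}

subj = example["token"][example["subj_start"]:example["subj_end"]]
print(subj)                             # ['Mary', 'D.', 'Crisp']
print(id2label[example["relation"]])    # '/people/deceased_person/place_of_death'
```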
### Data Splits
| | Train | Dev | Test |
|------|-------|------|------|
| GIDS | 11297 | 1864 | 5663 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/corr/abs-1804-06987,
author = {Sharmistha Jat and
Siddhesh Khandelwal and
Partha P. Talukdar},
title = {Improving Distantly Supervised Relation Extraction using Word and
Entity Based Attention},
journal = {CoRR},
volume = {abs/1804.06987},
year = {2018},
url = {http://arxiv.org/abs/1804.06987},
eprinttype = {arXiv},
eprint = {1804.06987},
timestamp = {Fri, 15 Nov 2019 17:16:02 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-06987.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
false | |
false | # Dataset Card for "eclassTrainST"
This NLI dataset can be used to fine-tune models for the task of sentence similarity. It consists of names and descriptions of pump properties from the ECLASS standard. |
false | # Dataset Card for "eclassCorpus"
This dataset consists of names and descriptions of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching paraphrases to these properties based on their semantics. |
false | # Dataset Card for "eclassQuery"
This dataset consists of paraphrases of ECLASS-standard pump properties. It can be used to evaluate models on the task of matching these paraphrases to the actual properties based on their semantics. |
false |
# Dataset Card for "ArASL_Database_Grayscale"
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/y7pckrw6z2/1
- **Paper:** [ArASL: Arabic Alphabets Sign Language Dataset](https://www.sciencedirect.com/science/article/pii/S2352340919301283)
### Dataset Summary
A new dataset consisting of 54,049 images of ArSL alphabets performed by more than 40 people, covering 32 standard Arabic signs and alphabets.
The number of images per class differs from one class to another. A sample image of all Arabic Language Signs is also attached. The CSV file contains the label of each corresponding Arabic Sign Language image, based on the image file name.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 32 classes.
### Languages
Arabic
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x201FA6EE748>,
'label': 0
}
```
### Citation Information
```
@article{LATIF2019103777,
title = {ArASL: Arabic Alphabets Sign Language Dataset},
journal = {Data in Brief},
volume = {23},
pages = {103777},
year = {2019},
issn = {2352-3409},
doi = {https://doi.org/10.1016/j.dib.2019.103777},
url = {https://www.sciencedirect.com/science/article/pii/S2352340919301283},
author = {Ghazanfar Latif and Nazeeruddin Mohammad and Jaafar Alghazo and Roaa AlKhalaf and Rawan AlKhalaf},
abstract = {A fully-labelled dataset of Arabic Sign Language (ArSL) images is developed for research related to sign language recognition. The dataset will provide researcher the opportunity to investigate and develop automated systems for the deaf and hard of hearing people using machine learning, computer vision and deep learning algorithms. The contribution is a large fully-labelled dataset for Arabic Sign Language (ArSL) which is made publically available and free for all researchers. The dataset which is named ArSL2018 consists of 54,049 images for the 32 Arabic sign language sign and alphabets collected from 40 participants in different age groups. Different dimensions and different variations were present in images which can be cleared using pre-processing techniques to remove noise, center the image, etc. The dataset is made available publicly at https://data.mendeley.com/datasets/y7pckrw6z2/1.}
}
```
### Contributions
Thanks to [MOHAMMAD ALBARHAM](https://github.com/PAIN-BARHAM) for adding this dataset to huggingface hub. |
false | |
false | # Dataset Card for Superheroes
## Dataset Description
1400+ superheroes' history and powers descriptions for text mining and NLP. [Original source](https://www.kaggle.com/datasets/jonathanbesomi/superheroes-nlp-dataset/code?resource=download)
## Context
The aim of this dataset is to make text analytics and NLP even more fun. All of us have dreamed of being a superhero and saving the world, yet we are still on Kaggle figuring out how Python works. So why not improve our NLP skills by analyzing superheroes' histories and powers?
What makes this dataset special is that it contains categorical and numerical features such as overall_score, intelligence_score, creator, alignment, gender, and eye_color, as well as the text features history_text and powers_text. By combining the two, many interesting insights can be gathered!
## Content
We collected all data from superherodb and prepared it for you in a nice and clean tabular format.
The dataset contains 1447 different Superheroes. Each superhero row has:
* overall_score - derived by superherodb from the power-stats features. Can you find the relationship?
* history_text - History of the Superhero (text features)
* powers_text - Description of Superheros' powers (text features)
* intelligence_score, strength_score, speed_score, durability_score, power_score and combat_score. (power stats features)
* "Origin" (full_name, alter_egos, …)
* "Connections" (occupation, base, teams, …)
* "Appearance" (gender, type_race, height, weight, eye_color, …)
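As a toy starting point for the overall_score question above (values are invented; column names come from the list above), one could compare overall_score against a simple aggregate of the six power-stat features:

```python
# Toy sketch: aggregate the six power-stat columns for one (invented) row.
row = {"intelligence_score": 90, "strength_score": 80, "speed_score": 70,
       "durability_score": 85, "power_score": 95, "combat_score": 60}

mean_stat = sum(row.values()) / len(row)
print(mean_stat)  # 80.0
```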
## Acknowledgements
The following [Github repository](https://github.com/jbesomi/texthero/tree/master/dataset/Superheroes%20NLP%20Dataset) contains the code used to scrape this Dataset.
|
false | # Dataset Card for Dataset Name
titulos_noticias_rcn_clasificadas
## Dataset Description
News was collected from the RCN website and the headlines were classified into ['salud' 'tecnologia' 'colombia' 'economia' 'deportes']:
salud = 1805 samples,
tecnologia = 1805 samples,
colombia = 1805 samples,
economia = 1805 samples,
deportes = 1805 samples,
for a total of 9030 rows.
Website: https://www.noticiasrcn.com/
- **Homepage:**
- **Repository:**
- **Point of Contact:**
### Languages
Spanish
## Dataset Structure
text, label, url |
false |
A dataset for translation. |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
false |
# Dataset: Cointelegraph en Español
## Dataset Description
A dataset that collects the title, description, author, etc. of articles.
It has approx. 10738 rows.
Website: https://cointelegraph.com/
Categories: #cryptocurrency, #Bitcoin, #Ethereum ... |
false |
# KPWr & CEN |
false | |
true | |
true | |
false | # Dataset Card for "lfqa_preprocessed"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb)
### Dataset Summary
This is a simplified version of [vblagoje's](https://huggingface.co/vblagoje) *[lfqa_support_docs](https://huggingface.co/datasets/vblagoje/lfqa_support_docs)* and *[lfqa](https://huggingface.co/datasets/vblagoje/lfqa)* datasets.
I generated it to provide a more straightforward way to train Seq2Seq models on context-based long-form question answering tasks.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"question": "what's the difference between a forest and a wood?",
"answer": "They're used interchangeably a lot. You'll get different answers from different resources, but the ...",
"context": [
"Wood is divided, according to its botanical origin, into two kinds: softwoods, ...",
"Processing and products differs especially with regard to the distinction between softwood and hardwood ..."
]
}
```
### Data Fields
The data fields are the same among all splits.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `context`: a list feature containing `string` features.
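As a hedged sketch of how these fields might be flattened into Seq2Seq input/target strings (the prompt format and helper name are our choice, not prescribed by the dataset):

```python
def to_seq2seq(ex):
    """Flatten a question and its context passages into one input string."""
    ctx = " ".join(ex["context"])
    return {"input_text": f"question: {ex['question']} context: {ctx}",
            "target_text": ex["answer"]}

# Abbreviated version of the train instance shown above.
ex = {"question": "what's the difference between a forest and a wood?",
      "answer": "They're used interchangeably a lot.",
      "context": ["Wood is divided into two kinds.",
                  "Processing and products differ."]}
out = to_seq2seq(ex)
print(out["input_text"])
```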
### Data Splits
| name |train|validation|
|----------|----:|---------:|
| |226147| 3020|
## Additional Information
### Licensing Information
This dataset is distributed under the MIT licence. |
true | |
false | AviationQA is introduced in the paper titled *There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering*.
https://aclanthology.org/2022.icon-main.26/
The paper was accepted at the main conference of ICON 2022.
We create a synthetic dataset, AviationQA: a set of 1 million factoid QA pairs generated with templates from 12,000 National Transportation Safety Board (NTSB) reports. The QA pairs are constructed so that their answers are entities occurring in AviationKG (Agarwal et al., 2022). AviationQA will help researchers find insights into aircraft accidents and their prevention.
Examples from the dataset:
What was the Aircraft Damage of the accident no. ERA22LA162? Answer: Substantial
Where was the Destination of the accident no. ERA22LA162? Answer: Naples, GA (APH) |
false |
# Dataset Card for HC4
## Dataset Description
- **Repository:** https://github.com/hltcoe/HC4
- **Paper:** https://arxiv.org/abs/2201.09992
### Dataset Summary
HC4 is a suite of test collections for ad hoc Cross-Language Information Retrieval (CLIR). The documents are Common Crawl News web pages in Chinese, Persian, and Russian.
### Languages
- Chinese
- Persian
- Russian
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 486K |
| `rus` (Russian) | 4.7M |
| `zho` (Chinese) | 646K |
### Data Fields
- `id`: unique identifier for this document
- `cc_file`: source file from Common Crawl
- `time`: extracted date/time from article
- `title`: title extracted from article
- `text`: extracted article body
- `url`: source URL
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/hc4')
dataset['fas'] # Persian documents
dataset['rus'] # Russian documents
dataset['zho'] # Chinese documents
```
## Citation Information
```
@article{Lawrie2022HC4,
author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
year = {2022},
month = apr,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
site = {Stavanger, Norway},
url = {https://arxiv.org/abs/2201.09992}
}
```
|
false |
A database of Wikipedia pages summarizing certain Natural Language Processing model applications. |
false |
# Dataset Card for "NER Model Tune"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/ayuhamaro/nlp-model-tune
- **Paper:** [More Information Needed]
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions |
true | |
false |
# Dataset Card for "WS POS Model Tune"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/ayuhamaro/nlp-model-tune
- **Paper:** [More Information Needed]
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions |
false | |
false |
# textures-normal-1k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The `textures-normal-1k` dataset is an image dataset of 1000+ normal map textures in 512x512 resolution with associated text descriptions.
The dataset was created for training/fine-tuning models for text to image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).
### Languages
The text descriptions are in English, and created by joining the tags of each material with a space character.
## Dataset Structure
### Data Instances
Each data point contains a 512x512 image and an additional `text` feature containing the description of the texture.
### Data Fields
* `image`: the normal map as a PIL image
* `text`: the associated text description created by merging the material's tags
### Data Splits
| | train |
| -- | ----- |
| ambientCG | 1447 |
## Dataset Creation
### Curation Rationale
`textures-normal-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows.
The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender.
By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the normal maps were included in this dataset.
Text descriptions were synthesized by joining the tags associated with each material with a space.
## Additional Information
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).
### Licensing Information
All of the images used in this dataset are CC0.
### Citation Information
[N/A]
### Contributions
Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset. |
false |
# Wikipedia (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ko)](https://ko.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) |
false |
Dataset for anime person detection.
| Dataset | Train | Test | Validate | Description |
|-------------|-------|------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| v1.1 | 9255 | 460 | 877 | Annotated on the Roboflow platform, including labeled data for various types of anime images (e.g. illustrations, comics). The dataset has also undergone data augmentation techniques to enhance its diversity and quality. |
| raw         | 3085  | 460  | 877      | The same as the `v1.1` dataset, without any preprocessing or data augmentation. Suitable for direct upload to the Roboflow platform. |
| AniDet3.v3i | 16124 | 944  | 1709     | Third-party dataset, source: https://universe.roboflow.com/university-of-michigan-ann-arbor/anidet3-ai42v/dataset/3 . The dataset only contains images from anime series, so models trained directly on it will not perform well on illustrations and comics. |
The best practice is to combine the `AniDet3.v3i` dataset with the `v1.1` dataset for training. We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection). |
false | # Urdu Summarization
## Dataset Overview
The Urdu Summarization dataset contains news articles in Urdu language along with their summaries. The dataset contains a total of 48,071 news articles collected from the BBC Urdu website. Each article is labeled with its headline, summary, and full text.
## Dataset Details
The dataset contains the following columns:
- id (string): Unique identifier for each article
- url (string): URL for the original article
- title (string): Headline of the article
- summary (string): Summary of the article
- text (string): Full text of the article
The dataset is distributed under the MIT License.
## Data Collection
The data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.
## Data Preprocessing
The text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of the articles and summarized the main points. The dataset was split into training, validation, and test sets, with 80%, 10%, and 10% of the data in each set respectively.
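The 80%/10%/10% split described above can be sketched as follows (a minimal illustration, not the actual preprocessing code used to build the dataset):

```python
import random

def split_80_10_10(items, seed=42):
    """Shuffle and split a collection into 80% train, 10% validation, 10% test."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 48,071 articles, as stated in the dataset overview
train, val, test = split_80_10_10(range(48071))
print(len(train), len(val), len(test))  # 38456 4807 4808
```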
## Potential Use Cases
This dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.
## Acknowledgements
We thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.
## Relevant Papers
No papers have been published yet using this dataset.
## License
The dataset is distributed under the MIT License. |
false |
<div align="center">
<img width="640" alt="fcakyon/crack-instance-segmentation" src="https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['cracks-and-spalling', 'object']
```
### Number of Images
```json
{'valid': 73, 'test': 37, 'train': 323}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/crack-instance-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5](https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5?ref=roboflow2huggingface)
### Citation
```
@misc{ 400-img_dataset,
title = { 400 img Dataset },
type = { Open Source Dataset },
author = { Master dissertation },
howpublished = { \\url{ https://universe.roboflow.com/master-dissertation/400-img } },
url = { https://universe.roboflow.com/master-dissertation/400-img },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 14, 2023 at 10:08 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 433 images.
Cracks and spalling are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
|
false |
Trained on 29 NSFW/SFW Yor Forger images, but don't worry! The SFW outputs work surprisingly well! |
false |
# Dataset Card for OpenSubtitles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset is a subset of the en-nl open_subtitles dataset.
It contains only subtitles of TV shows that have a rating of at least 8.0 with at least 1000 votes.
The subtitles are ordered and concatenated into buffers of several lengths, with a maximum of 370 tokens
as tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.
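The buffering step can be sketched roughly as follows. This is a hypothetical illustration: it uses a whitespace token count as a stand-in for the actual 'yhavinga/ul2-base-dutch' tokenizer, and the function name is invented.

```python
def pack_subtitles(lines, max_tokens=370):
    """Greedily concatenate ordered subtitle lines into buffers of at most max_tokens."""
    buffers, current, current_len = [], [], 0
    for line in lines:
        n = len(line.split())  # stand-in for the real tokenizer's token count
        if current and current_len + n > max_tokens:
            buffers.append(" ".join(current))  # flush the full buffer
            current, current_len = [], 0
        current.append(line)
        current_len += n
    if current:
        buffers.append(" ".join(current))
    return buffers

buffers = pack_subtitles(["hello there", "general kenobi", "you are bold"], max_tokens=4)
print(buffers)  # ['hello there general kenobi', 'you are bold']
```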
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- en
- nl
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the open_subtitles dataset.
|
false |
This dataset is derived from RICO SCA, presented by Google Research in the seq2act paper. It is a synthetically generated dataset for the UI RefExp task.
See original repo for details and licensing info:
https://github.com/google-research/google-research/blob/master/seq2act/data_generation/README.md#generate-ricosca-dataset
The splits in this dataset are consistent with the splits in the crowdsourced [UIBert RefExp](https://huggingface.co/datasets/ivelin/ui_refexp_saved) dataset. Training split examples do not include images from the Validation or Test examples in the UI Bert RefExp dataset. Respectively the images in Validation and Test splits here match the images in the Validation and Test splits of UIBert RefExp.
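Split consistency of this kind can be sketched as a simple image-id filter. This is a hypothetical illustration: the field name `image_id` and the function are assumptions, not the actual dataset schema or build script.

```python
def filter_consistent_train(train_examples, val_ids, test_ids):
    """Drop training examples whose image also occurs in the validation/test splits."""
    held_out = set(val_ids) | set(test_ids)
    return [ex for ex in train_examples if ex["image_id"] not in held_out]

train = [{"image_id": "a"}, {"image_id": "b"}, {"image_id": "c"}]
filtered = filter_consistent_train(train, val_ids=["b"], test_ids=["c"])
print([ex["image_id"] for ex in filtered])  # ['a']
```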
|
false | |
true | # AutoTrain Dataset for project: consunmer-complain-multiclass-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project consunmer-complain-multiclass-classification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": null,
"text": "This is awful and borderline abuse. I can't imagine thinking that's even slightly okay",
"target": 5
},
{
"feat_Unnamed: 0": null,
"text": "i didnt feel so hot",
"target": 3
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4', '5'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 20663 |
| valid | 5167 |
|
false |
# Dataset Card for LFID Magnetic Field Data
You will need the [ChaosMagPy](https://chaosmagpy.readthedocs.io/en/master/) package.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LIFD DataSets homepage](https://cemac.github.io/LIFD_ML_Datasets/)
- **Repository:** [LIFD GitHub Repo](https://github.com/cemac/LIFD_ML_Datasets/)
- **Point of Contact:** [*coming soon*]()
### Dataset Summary
A description of the dataset:
The gufm1 model is a global geomagnetic model based on spherical harmonics, covering the period 1590 - 1990, and is described in the publication:
[Andrew Jackson, Art R. T. Jonkers and Matthew R. Walker (2000), “Four centuries of geomagnetic secular variation from historical records”, Phil. Trans. R. Soc. A 358:957–990, http://doi.org/10.1098/rsta.2000.0569](https://royalsocietypublishing.org/doi/10.1098/rsta.2000.0569)
### Supported Tasks and Leaderboards
*coming soon - Kaggle links?*
### Data Fields
The dataset has dimension (181, 361, 401) whose axes represent co-latitude, longitude, time, and whose values are the radial magnetic field at the core-mantle boundary (radius 3485km) in nT.
The colatitude takes values (in degrees): 0,1,2,3,…180; longitude (degrees) takes values -180,-179,….180; and time is yearly 1590, 1591, …1990.
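Under these axis conventions, mapping physical coordinates to array indices might look like this (a sketch only; the exact indexing convention is an assumption based on the stated ranges):

```python
def grid_index(colat_deg, lon_deg, year):
    """Return (i, j, k) indices into the (181, 361, 401) radial-field array."""
    i = int(colat_deg)        # colatitude 0..180 degrees -> index 0..180
    j = int(lon_deg) + 180    # longitude -180..180 degrees -> index 0..360
    k = year - 1590           # year 1590..1990 -> index 0..400
    assert 0 <= i <= 180 and 0 <= j <= 360 and 0 <= k <= 400, "coordinates out of range"
    return i, j, k

print(grid_index(0, -180, 1590))   # (0, 0, 0)
print(grid_index(180, 180, 1990))  # (180, 360, 400)
```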
## Dataset Creation
The native model representation is converted into a discrete dataset in physical space and time, using the Python package [Chaosmagpy](https://chaosmagpy.readthedocs.io/en/master/)
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
|
false |
# Dataset Card for ScandiWiki
## Dataset Description
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Total amount of disk used:** 4485.90 MB
### Dataset Summary
ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål,
Norwegian Nynorsk, Swedish, Icelandic and Faroese.
### Supported Tasks and Leaderboards
This dataset is intended for general language modelling.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`),
Norwegian Nynorsk (`nn`), Icelandic (`is`) and Faroese (`fo`).
## Dataset Structure
### Data Instances
- **Total amount of disk used:** 4485.90 MB
An example from the `train` split of the `fo` subset looks as follows.
```
{
'id': '3380',
'url': 'https://fo.wikipedia.org/wiki/Enk%C3%B6pings%20kommuna',
'title': 'Enköpings kommuna',
'text': 'Enköpings kommuna (svenskt: Enköpings kommun), er ein kommuna í Uppsala län í Svøríki. Enköpings kommuna hevur umleið 40.656 íbúgvar (2013).\n\nKeldur \n\nKommunur í Svøríki'
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature.
- `url`: a `string` feature.
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Subsets
| name | samples |
|----------|----------:|
| sv | 2,469,978 |
| nb | 596,593 |
| da | 287,216 |
| nn | 162,776 |
| is | 55,418 |
| fo | 12,582 |
## Dataset Creation
### Curation Rationale
It takes quite a long time to parse the Wikipedia dump as well as to deduplicate it, so
this dataset is primarily for convenience.
### Source Data
The original data is from the [wikipedia
dataset](https://huggingface.co/datasets/wikipedia).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/), in accordance with the same
license of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia).
|
true |
# Dataset Card for KorFin-ABSA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
KorFin-ASC is an extension of KorFin-ABSA, including 8,818 samples annotated with (aspect, polarity) pairs.
The samples were collected from [KLUE-TC](https://klue-benchmark.com/tasks/66/overview/description) and
analyst reports from [Naver Finance](https://finance.naver.com).
Annotation of the dataset is described in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
```
{
  "title": "LGU+ 1분기 영업익 1천706억원…마케팅 비용 감소",
  "aspect": "LG U+",
  "sentiment": "NEUTRAL",
  "url": "https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008363739",
  "annotator_id": "A_01",
  "Type": "single"
}
```
### Data Fields
* title: headline of the news article
* aspect: the target entity of the sentiment
* sentiment: polarity label (POSITIVE/NEGATIVE/NEUTRAL)
* url: link to the source article
* annotator_id: ID of the annotator
* Type: annotation type (e.g. single)
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("amphora/KorFin-ASC")
```
Please find more information about the code and how the data was collected in the paper [Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance](https://arxiv.org/abs/2301.03136).
The best-performing model on this dataset can be found at [link](https://huggingface.co/amphora/KorFinASC-XLM-RoBERTa).
### Licensing Information
KorFin-ASC is licensed under the terms of the [cc-by-sa-4.0](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
Please cite this data using:
```
@article{son2023removing,
title={Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance},
author={Son, Guijin and Lee, Hanwool and Kang, Nahyeon and Hahm, Moonjeong},
journal={arXiv preprint arXiv:2301.03136},
year={2023}
}
```
### Contributions
Thanks to [@Albertmade](https://github.com/h-albert-lee), [@amphora](https://github.com/guijinSON) for making this dataset. |
false | |
false |
<div align="center">
<img width="640" alt="keremberke/excavator-detector" src="https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['excavators', 'dump truck', 'wheel loader']
```
### Number of Images
```json
{'test': 144, 'train': 2245, 'valid': 267}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/excavator-detector", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3](https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3?ref=roboflow2huggingface)
### Citation
```
@misc{ excavators-cwlh0_dataset,
title = { Excavators Dataset },
type = { Open Source Dataset },
author = { Mohamed Sabek },
howpublished = { \\url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-01-16 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on April 4, 2022 at 8:56 AM GMT
It includes 2656 images.
Excavators are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
|
true |
# Dataset Card for "super_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/boolean-questions](https://github.com/google-research-datasets/boolean-questions)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 55.66 MB
- **Size of the generated dataset:** 238.01 MB
- **Total amount of disk used:** 293.67 MB
### Dataset Summary
SuperGLUE (https://super.gluebenchmark.com/) is a new benchmark styled after
GLUE with a new set of more difficult language understanding tasks, improved
resources, and a new public leaderboard.
BoolQ (Boolean Questions, Clark et al., 2019a) is a QA task where each example consists of a short
passage and a yes/no question about the passage. The questions are provided anonymously and
unsolicited by users of the Google search engine, and afterwards paired with a paragraph from a
Wikipedia article containing the answer. Following the original work, we evaluate with accuracy.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### axb
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.26 MB
An example of 'test' looks as follows.
```
```
#### axg
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.05 MB
- **Total amount of disk used:** 0.06 MB
An example of 'test' looks as follows.
```
```
#### boolq
- **Size of downloaded dataset files:** 3.93 MB
- **Size of the generated dataset:** 9.92 MB
- **Total amount of disk used:** 13.85 MB
An example of 'train' looks as follows.
```
```
#### cb
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.19 MB
- **Total amount of disk used:** 0.27 MB
An example of 'train' looks as follows.
```
```
#### copa
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.16 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### axb
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### axg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `not_entailment` (1).
#### boolq
- `question`: a `string` feature.
- `passage`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `False` (0), `True` (1).
#### cb
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `entailment` (0), `contradiction` (1), `neutral` (2).
#### copa
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `idx`: a `int32` feature.
- `label`: a classification label, with possible values including `choice1` (0), `choice2` (1).
### Data Splits
#### axb
| |test|
|---|---:|
|axb|1104|
#### axg
| |test|
|---|---:|
|axg| 356|
#### boolq
| |train|validation|test|
|-----|----:|---------:|---:|
|boolq| 9427| 3270|3245|
#### cb
| |train|validation|test|
|---|----:|---------:|---:|
|cb | 250| 56| 250|
#### copa
| |train|validation|test|
|----|----:|---------:|---:|
|copa| 400| 100| 500|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{clark2019boolq,
title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
author={Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
booktitle={NAACL},
year={2019}
}
@article{wang2019superglue,
title={SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
author={Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R},
journal={arXiv preprint arXiv:1905.00537},
year={2019}
}
Note that each SuperGLUE dataset has its own citation. Please see the source to
get the correct citation for each contained dataset.
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
false |
# Dataset Card for LFID Seismic Data
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LIFD DataSets homepage](https://github.com/cemac/LIFD_ML_Datasets)
- **Repository:** [LIFD GitHub Repo](https://github.com/cemac/LIFD_ML_Datasets)
- **Point of Contact:** [*coming soon*]()
### Dataset Summary
A description of the dataset:
### Supported Tasks and Leaderboards
*coming soon - Kaggle links?*
### Data Fields
SAC files
## Dataset Creation
All seismic data were downloaded through the IRIS Wilber 3 system (https://ds.iris.edu/wilber3/) or IRIS Web Services (https://service.iris.edu/), including the following seismic networks: (1) the AZ (ANZA; UC San Diego, 1982); (2) the TA (Transportable Array; IRIS, 2003); (3) the US (USNSN, Albuquerque, 1990); (4) the IU (GSN; Albuquerque, 1988).
### Source Data
## Additional Information
### Dataset Curators
### Licensing Information
### Citation Information
### Contributions
|
false | # AutoTrain Dataset for project: attempt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project attempt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<800x1000 RGB PIL image>",
"target": 13
},
{
"image": "<254x512 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 277 |
| valid | 80 |
|
false |
# Dataset Card for FaceMask
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Repository:** (https://huggingface.co/datasets/poolrf2001/FaceMask)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Summary
FaceMask dataset with images of people wearing masks correctly, incorrectly, or not at all.
### Supported Tasks and Leaderboards
- `image-classification`: Based on a face image, the goal of this task is to predict how the mask is worn (`mask_weared_incorrect`, `with_mask`, or `without_mask`).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x16BAA72A4A8>,
'labels': 1
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `labels`: an `int` classification label.
Class Label Mappings:
```json
{
"mask_weared_incorrect": 0,
"with_mask": 1,
  "without_mask": 2
}
```
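For convenience, the integer labels above can be resolved to class names with a plain lookup table; the sketch below simply mirrors the mapping listed above:

```python
# Minimal sketch mirroring the class-label mapping above
id2label = {0: "mask_weared_incorrect", 1: "with_mask", 2: "without_mask"}
label2id = {name: idx for idx, name in id2label.items()}

# e.g. resolve the label of the sample shown in "Data Instances"
print(id2label[1])  # with_mask
```

The same mapping is also exposed by the dataset's `ClassLabel` feature, whose `int2str`/`str2int` methods convert between integer ids and class names.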
### Data Splits
| |train|validation|test|
|-------------|----:|---------:|---:|
|# of examples|1500 |180 |180 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@ONLINE {facemaskdata,
author="Pool",
title="FaceMask dataset",
month="January",
year="2023",
url="https://github.com/poolrf2001/maskFace"
}
```
### Contributions
|
false | # AutoTrain Dataset for project: alt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project alt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 1
},
{
"image": "<1024x590 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 243 |
| valid | 243 |
|
false | # AutoTrain Dataset for project: testttt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project testttt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<113x220 RGB PIL image>",
"target": 2
},
{
"image": "<1280x720 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 184 |
| valid | 58 |
|
false | # AutoTrain Dataset for project: let
## Dataset Description
This dataset has been automatically processed by AutoTrain for project let.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 1
},
{
"image": "<1024x590 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adult Chara', 'Adult Chara and Young Chara', 'Chara', 'Female Kris', 'Kris', 'Kris and Adult Chara', 'Kris and Chara', 'Kris and Female Chara', 'Kris and Male Chara', 'Kris and The Player', 'Kris and a Soul', 'Kris next to the Ghost of Chara', 'Male Kris', 'Male Kris and Female Kris', 'StoryShift Chara', 'StoryShift Chara and Young Chara', 'Teen Chara and Young Chara', 'Teenager Chara and Young Chara', 'Young Chara'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 242 |
| valid | 242 |
|
false |
# Arithmetic problems for a dialogue system
The dataset contains samples with simple math problems, roughly of the following form:
```
- Fedor's flashlight runs on 2 batteries, and Lyokha's on 6. How many batteries do Fedor's and Lyokha's flashlights need in total?
- 2+6=8, that is how many batteries are needed.
- Now add 469 to the result, what do you get?
- 8 plus 469 equals 477
- Divide by 53, what do you get?
- 9
```
Most of the problems involve arithmetic operations. There is also a number of problems
on finding the roots of a quadratic equation:
```
- Find the real roots of the quadratic equation a⋅x²+b⋅x+c for a=45, b=225, c=-270
- There are two real roots here, -6 and 1
```
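For reference, the roots in the example above can be checked with a few lines of Python using the standard quadratic formula:

```python
import math

# Check the example: 45*x**2 + 225*x - 270 = 0
a, b, c = 45, 225, -270
d = b * b - 4 * a * c  # discriminant; positive, so two real roots
roots = sorted([(-b - math.sqrt(d)) / (2 * a),
                (-b + math.sqrt(d)) / (2 * a)])
print(roots)  # [-6.0, 1.0]
```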
There is also a growing set of problems whose answers spell out the solution steps:
```
- 8 gophers live in the marshy forests. A hunter eats one gopher every 9 days. How many gophers will be left after 12 days?
- Over 12 days the hunter will dine 1 time. Therefore 8-1=7 gophers will remain.
```
Some problems are constructed to force the model to pay attention not merely to the
presence of numbers, but to the context in which they are used:
```
- Vika brought 5 tangerines to school. Her friends asked her to share the tangerines with them. She gave them 3. How many tangerines did Vika give away?
- 3
```
Sometimes the numbers in a problem are irrelevant to its essence, which should push the model even harder to take context into account:
```
- Multiplying eight by seven, the teacher at secondary school No. 77 got 5084. Did he calculate correctly?
- The teacher at secondary school No. 77 made a mistake, since 8*7=56, not 5084
```
## Data format
Each sample contains a list of linked utterances without the "- " prefix, forming a chain of arithmetic tasks in which
the statement of each new problem requires analyzing at least the preceding utterance.
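A sample can thus be thought of as a flat list of alternating user/model utterances. One plausible way to turn it into (context, reply) training pairs is sketched below; the sample text is a translated illustration, not a verbatim record from the dataset:

```python
# Hypothetical sample: linked utterances without the "- " prefix;
# each new task depends on at least the previous reply.
sample = [
    "What is 2+2?",
    "2+2 equals 4",
    "Now add 469 to the result, what do you get?",
    "4 plus 469 equals 473",
]

# Pair every model reply (odd positions) with the dialogue history before it
pairs = [(sample[:i], sample[i]) for i in range(1, len(sample), 2)]
print(len(pairs))  # 2
```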
## Lexical variability of answers
For many problems, the answer is phrased not just as a bare number; accompanying text is added:
```
- What is 2+2?
- 2+2 equals 4
```
## Generative model metrics
After fine-tuning (1 epoch, lr=1e-5) on 90% of the dataset, the following metrics are obtained on the test portion:
| Model | Mean deviation of the numeric answer from the correct one | Share of correct answers |
|----------------------------------------|-----------|-------|
| sberbank-ai/rugpt3small_based_on_gpt2  | 8.03e+02% | 0.057 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 2.89e+02% | 0.085 |
| sberbank-ai/rugpt3large_based_on_gpt2  | 1.58e+02% | 0.131 |
| facebook/xglm-2.9B                     | 8.13e+02% | 0.224 |
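The deviation metric can be read as a relative error between the model's numeric answer and the reference. The card does not state the exact formula, so the definition below is one plausible sketch:

```python
def relative_deviation_pct(pred: float, target: float) -> float:
    # Absolute deviation of the predicted number from the correct one,
    # expressed as a percentage of the correct answer's magnitude.
    # The small epsilon guards against division by zero.
    return abs(pred - target) / max(abs(target), 1e-9) * 100.0

print(relative_deviation_pct(56.0, 56.0))  # 0.0
```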
## Sample generator
The dataset was built with the template-based generation engine from this repository: [https://github.com/Koziev/math](https://github.com/Koziev/math).
## Using the dataset
The dataset is used to train a [chatbot](https://github.com/Koziev/chatbot).
|