url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 3.64B | node_id stringlengths 18 32 | number int64 1 7.87k | title stringlengths 1 290 | user dict | labels listlengths 0 4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 4 | milestone dict | comments int64 0 70 | created_at stringdate 2020-04-14 10:18:02 2025-11-18 08:33:04 | updated_at stringdate 2020-04-27 16:04:17 2025-11-18 16:07:04 | closed_at stringlengths 3 25 | author_association stringclasses 4 values | type float64 | active_lock_reason float64 | sub_issues_summary dict | issue_dependencies_summary dict | body stringlengths 0 228k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app float64 | state_reason stringclasses 4 values | draft float64 0 1 ⌀ | pull_request dict | is_pull_request bool 2 classes
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2214/comments | https://api.github.com/repos/huggingface/datasets/issues/2214/events | https://github.com/huggingface/datasets/issues/2214 | 856,333,657 | MDU6SXNzdWU4NTYzMzM2NTc= | 2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | {
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 4 | 2021-04-12 20:26:01+00:00 | 2021-04-23 15:20:02+00:00 | 2021-04-23 15:20:02+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | {
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2214/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2213/comments | https://api.github.com/repos/huggingface/datasets/issues/2213/events | https://github.com/huggingface/datasets/pull/2213 | 856,025,320 | MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2 | 2,213 | Fix lc_quad download checksum | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-04-12 14:16:59+00:00 | 2021-04-14 22:04:54+00:00 | 2021-04-14 13:42:25+00:00 | COLLABORATOR | null | null | null | null | Fixes #2211 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2213/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2213",
"merged_at": "2021-04-14T13:42:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2212/comments | https://api.github.com/repos/huggingface/datasets/issues/2212/events | https://github.com/huggingface/datasets/issues/2212 | 855,999,133 | MDU6SXNzdWU4NTU5OTkxMzM= | 2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 5 | 2021-04-12 13:49:56+00:00 | 2023-10-03 16:09:19+00:00 | 2023-10-03 16:09:18+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2212/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2211/comments | https://api.github.com/repos/huggingface/datasets/issues/2211/events | https://github.com/huggingface/datasets/issues/2211 | 855,988,410 | MDU6SXNzdWU4NTU5ODg0MTA= | 2,211 | Getting checksum error when trying to load lc_quad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 2 | 2021-04-12 13:38:58+00:00 | 2021-04-14 13:42:25+00:00 | 2021-04-14 13:42:25+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, ge... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2211/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 2 | 2021-04-12 08:33:02+00:00 | 2021-04-13 02:03:05+00:00 | 2021-04-13 02:03:05+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi,
When I use datasets with 600GB data, the dataloading speed increases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle ddp training.
When looking at the pytorch... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2209/comments | https://api.github.com/repos/huggingface/datasets/issues/2209/events | https://github.com/huggingface/datasets/pull/2209 | 855,638,232 | MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2 | 2,209 | Add code of conduct to the project | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 0 | 2021-04-12 07:16:14+00:00 | 2021-04-12 17:55:52+00:00 | 2021-04-12 17:55:52+00:00 | MEMBER | null | null | null | null | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2209/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2209",
"merged_at": "2021-04-12T17:55:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2208/comments | https://api.github.com/repos/huggingface/datasets/issues/2208/events | https://github.com/huggingface/datasets/pull/2208 | 855,343,835 | MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw | 2,208 | Remove Python2 leftovers | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 1 | 2021-04-11 16:08:03+00:00 | 2021-04-14 22:05:36+00:00 | 2021-04-14 13:40:51+00:00 | COLLABORATOR | null | null | null | null | This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2208/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2208",
"merged_at": "2021-04-14T13:40:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2207/comments | https://api.github.com/repos/huggingface/datasets/issues/2207/events | https://github.com/huggingface/datasets/issues/2207 | 855,267,383 | MDU6SXNzdWU4NTUyNjczODM= | 2,207 | making labels consistent across the datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 2 | 2021-04-11 10:03:56+00:00 | 2022-06-01 16:23:08+00:00 | 2022-06-01 16:21:10+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels however are not consistent with the actual labels sometimes, for instance in case of XNLI, the actual labels are 0,1,2, but if ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2207/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url"... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 7 | 2021-04-11 08:40:09+00:00 | 2021-11-10 12:18:30+00:00 | 2021-11-10 12:04:28+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2205/comments | https://api.github.com/repos/huggingface/datasets/issues/2205/events | https://github.com/huggingface/datasets/pull/2205 | 855,207,605 | MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw | 2,205 | Updating citation information on LinCE readme | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2021-04-11 03:18:05+00:00 | 2021-04-12 17:53:34+00:00 | 2021-04-12 17:53:34+00:00 | CONTRIBUTOR | null | null | null | null | Hi!
I just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset.
Thanks! | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2205/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2205",
"merged_at": "2021-04-12T17:53:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2204/comments | https://api.github.com/repos/huggingface/datasets/issues/2204/events | https://github.com/huggingface/datasets/pull/2204 | 855,144,431 | MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2 | 2,204 | Add configurable options to `seqeval` metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 0 | 2021-04-10 19:58:19+00:00 | 2021-04-15 13:49:46+00:00 | 2021-04-15 13:49:46+00:00 | CONTRIBUTOR | null | null | null | null | Fixes #2148
Adds options to use strict mode, different schemes of evaluation, sample weight and adjust zero_division behavior, if encountered.
`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea). | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2204/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2204",
"merged_at": "2021-04-15T13:49:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2203/comments | https://api.github.com/repos/huggingface/datasets/issues/2203/events | https://github.com/huggingface/datasets/pull/2203 | 855,053,595 | MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5 | 2,203 | updated banking77 train and test data | {
"avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4",
"events_url": "https://api.github.com/users/hsali/events{/privacy}",
"followers_url": "https://api.github.com/users/hsali/followers",
"following_url": "https://api.github.com/users/hsali/following{/other_user}",
"gists_url": "https://api.g... | [] | closed | false | null | [] | null | 2 | 2021-04-10 12:10:10+00:00 | 2021-04-23 14:33:39+00:00 | 2021-04-23 14:33:39+00:00 | NONE | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2203/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2203",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2203"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/2202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2202/comments | https://api.github.com/repos/huggingface/datasets/issues/2202/events | https://github.com/huggingface/datasets/pull/2202 | 854,501,109 | MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx | 2,202 | Add classes GenerateMode, DownloadConfig and Version to the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | 0 | 2021-04-09 12:58:19+00:00 | 2021-04-12 17:58:00+00:00 | 2021-04-12 17:57:59+00:00 | MEMBER | null | null | null | null | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
Related to #2187. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2202/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2202.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2202",
"merged_at": "2021-04-12T17:57:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2202.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2201/comments | https://api.github.com/repos/huggingface/datasets/issues/2201/events | https://github.com/huggingface/datasets/pull/2201 | 854,499,563 | MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-04-09 12:56:19+00:00 | 2021-04-12 13:32:17+00:00 | 2021-04-12 13:32:16+00:00 | MEMBER | null | null | null | null | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the featur... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2201/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"merged_at": "2021-04-12T13:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2200/comments | https://api.github.com/repos/huggingface/datasets/issues/2200/events | https://github.com/huggingface/datasets/issues/2200 | 854,449,656 | MDU6SXNzdWU4NTQ0NDk2NTY= | 2,200 | _prepare_split will overwrite DatasetBuilder.info.features | {
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 2 | 2021-04-09 11:47:13+00:00 | 2021-06-04 10:37:35+00:00 | 2021-06-04 10:37:35+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi, here is my issue:
I initialized a CSV DatasetBuilder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2200/timeline | null | completed | null | null | false |
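The truncated `get_dataset_features` helper in issue 2200 above builds a features mapping from comma-separated column specs. A minimal pure-Python sketch of that parsing pattern, using plain type-name strings as a hypothetical stand-in for `datasets.Value` objects (the `label_features` argument and the `"int64"` type are assumptions, not from the original snippet):

```python
# Sketch of the feature-spec parsing pattern from issue 2200.
# Strings like "string"/"int64" stand in for datasets.Value instances.
def get_dataset_features(text_features, label_features=None):
    features = {}
    if text_features:
        features.update({name: "string" for name in text_features.strip().split(",")})
    if label_features:
        features.update({name: "int64" for name in label_features.strip().split(",")})
    return features
```

The reported bug is that features built this way were later overwritten by `_prepare_split`; the sketch only shows how the intended features dict is assembled.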
https://api.github.com/repos/huggingface/datasets/issues/2199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2199/comments | https://api.github.com/repos/huggingface/datasets/issues/2199/events | https://github.com/huggingface/datasets/pull/2199 | 854,417,318 | MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3 | 2,199 | Fix backward compatibility in Dataset.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | 3 | 2021-04-09 11:01:10+00:00 | 2021-04-09 15:57:05+00:00 | 2021-04-09 15:57:05+00:00 | MEMBER | null | null | null | null | Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2199/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2199",
"merged_at": "2021-04-09T15:57:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2198/comments | https://api.github.com/repos/huggingface/datasets/issues/2198/events | https://github.com/huggingface/datasets/pull/2198 | 854,357,481 | MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz | 2,198 | added file_permission in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
... | [] | closed | false | null | [] | null | 1 | 2021-04-09 09:39:06+00:00 | 2021-04-16 14:11:46+00:00 | 2021-04-16 14:11:46+00:00 | CONTRIBUTOR | null | null | null | null | As discussed in #2065 I've added `file_permission` argument in `load_dataset`.
Added mainly 2 things here:
1) Permission of downloaded datasets when converted to .arrow files can be changed with argument `file_permission` argument in `load_dataset` (default is 0o644 only)
2) Incase the user uses `map` later on t... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2198/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2198.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2198",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2198.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2198"
} | true |
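PR 2198 above proposes a `file_permission` argument (default `0o644`) applied to downloaded/cached files. A sketch of that behaviour using only the standard library — the function name `write_with_permission` is illustrative, not the PR's actual API:

```python
import os
import stat

# Sketch of the proposed behaviour from PR 2198: after writing a cache
# file, apply the requested permission bits (0o644 by default) and
# return the resulting mode for inspection.
def write_with_permission(path, data, file_permission=0o644):
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, file_permission)
    return stat.S_IMODE(os.lstat(path).st_mode)
```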
https://api.github.com/repos/huggingface/datasets/issues/2197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2197/comments | https://api.github.com/repos/huggingface/datasets/issues/2197/events | https://github.com/huggingface/datasets/pull/2197 | 854,356,559 | MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw | 2,197 | fix missing indices_files in load_form_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-04-09 09:37:57+00:00 | 2021-04-09 09:54:40+00:00 | 2021-04-09 09:54:39+00:00 | MEMBER | null | null | null | null | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping.
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2197/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"merged_at": "2021-04-09T09:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2196/comments | https://api.github.com/repos/huggingface/datasets/issues/2196/events | https://github.com/huggingface/datasets/issues/2196 | 854,126,114 | MDU6SXNzdWU4NTQxMjYxMTQ= | 2,196 | `load_dataset` caches two arrow files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https:... | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | 3 | 2021-04-09 03:49:19+00:00 | 2021-04-12 05:25:29+00:00 | 2021-04-12 05:25:29+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi,
I am using datasets to load large json file of 587G.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2196/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2195/comments | https://api.github.com/repos/huggingface/datasets/issues/2195/events | https://github.com/huggingface/datasets/issues/2195 | 854,070,194 | MDU6SXNzdWU4NTQwNzAxOTQ= | 2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 2 | 2021-04-09 01:37:12+00:00 | 2021-04-09 09:55:09+00:00 | 2021-04-09 09:54:39+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2195/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | 1 | 2021-04-08 21:02:48+00:00 | 2021-04-09 16:56:50+00:00 | 2021-04-09 01:52:57+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language... | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | completed | null | null | false |
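The `TypeError: can't pickle _LazyModule objects` in issue 2194 above is a specific case of a general rule: plain module objects have no pickle reducer, so any mapped function that closes over one fails to hash/serialize. A minimal stdlib demonstration of the underlying behaviour (not the transformers `_LazyModule` class itself):

```python
import pickle
import types

# Plain module objects cannot be pickled by the standard pickle module,
# which is the root cause behind errors like the one in issue 2194 when
# a function passed to .map() captures a (lazily imported) module.
def is_picklable(obj):
    try:
        pickle.dumps(obj)
        return True
    except TypeError:
        return False

mod = types.ModuleType("demo")
```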
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_u... | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | 12 | 2021-04-08 18:16:14+00:00 | 2021-04-26 16:13:59+00:00 | 2021-04-26 16:13:59+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | completed | null | null | false |
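Issue 2193 above reports that filtering on a single column (`num_tokens`) is slow even with `input_columns` specified, because every example is still materialised. A pure-Python sketch of the column-only filtering idea — build the keep-mask from just one column, then select rows — using a plain dict-of-lists as a stand-in for an Arrow table:

```python
# Sketch of column-only filtering: compute the boolean mask from a single
# column, then gather matching indices across all columns. A dict of
# equal-length lists stands in for the dataset's Arrow table.
def filter_on_column(table, column, predicate):
    keep = [i for i, v in enumerate(table[column]) if predicate(v)]
    return {name: [values[i] for i in keep] for name, values in table.items()}
```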
https://api.github.com/repos/huggingface/datasets/issues/2192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2192/comments | https://api.github.com/repos/huggingface/datasets/issues/2192/events | https://github.com/huggingface/datasets/pull/2192 | 853,547,910 | MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0 | 2,192 | Fix typo in huggingface hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | 0 | 2021-04-08 14:42:24+00:00 | 2021-04-08 15:47:41+00:00 | 2021-04-08 15:47:40+00:00 | MEMBER | null | null | null | null | pip knows how to resolve to `huggingface_hub`, but conda doesn't!
The `packaging` dependency is also required for the build to complete. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2192/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2192.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2192",
"merged_at": "2021-04-08T15:47:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2192.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2191/comments | https://api.github.com/repos/huggingface/datasets/issues/2191/events | https://github.com/huggingface/datasets/pull/2191 | 853,364,204 | MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0 | 2,191 | Refactorize tests to use Dataset as context manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | 4 | 2021-04-08 11:21:04+00:00 | 2021-04-19 07:53:11+00:00 | 2021-04-19 07:53:10+00:00 | MEMBER | null | null | null | null | Refactorize Dataset tests to use Dataset as context manager. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2191/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2191",
"merged_at": "2021-04-19T07:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2190/comments | https://api.github.com/repos/huggingface/datasets/issues/2190/events | https://github.com/huggingface/datasets/issues/2190 | 853,181,564 | MDU6SXNzdWU4NTMxODE1NjQ= | 2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | 2 | 2021-04-08 07:53:43+00:00 | 2021-05-24 10:03:55+00:00 | 2021-05-24 10:03:55+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_ur... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2190/timeline | null | completed | null | null | false |
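The split expressions in issue 2190 above (`train[:98%]` / `train[98%:]`) map percentages to row boundaries. A sketch of how such a percent slice resolves to indices — the rounding shown here is illustrative and may differ from the library's exact boundary rules:

```python
# How a percent slice like "train[:98%]" can map to row indices.
# Illustrative rounding only; the datasets library has its own
# boundary/rounding conventions for ReadInstruction.
def pct_slice(n_rows, from_pct=0, to_pct=100):
    start = round(n_rows * from_pct / 100)
    stop = round(n_rows * to_pct / 100)
    return start, stop
```

With 1000 rows, `train[:98%]` and `train[98%:]` partition the data at row 980 under this rounding.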
https://api.github.com/repos/huggingface/datasets/issues/2189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2189/comments | https://api.github.com/repos/huggingface/datasets/issues/2189/events | https://github.com/huggingface/datasets/issues/2189 | 853,052,891 | MDU6SXNzdWU4NTMwNTI4OTE= | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 1 | 2021-04-08 04:42:53+00:00 | 2022-06-01 16:32:15+00:00 | 2022-06-01 16:32:15+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2189/timeline | null | completed | null | null | false |
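Issue 2189 above shards a dataset with `loaded_data.shard(n, i, contiguous=True)` and then reassembles it with `concatenate_datasets`. A pure-Python sketch of the *semantics* of contiguous sharding — n consecutive chunks whose concatenation restores the original order — with a plain list standing in for the dataset:

```python
# Contiguous sharding as in dataset.shard(n, i, contiguous=True):
# split rows into n consecutive chunks; earlier chunks absorb the
# remainder, and concatenating all chunks restores the original order.
def contiguous_shards(rows, n):
    div, mod = divmod(len(rows), n)
    shards, start = [], 0
    for i in range(n):
        stop = start + div + (1 if i < mod else 0)
        shards.append(rows[start:stop])
        start = stop
    return shards
```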
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/thanh-p/events{/privacy}",
"followers_url": "https://api.github.com/users/thanh-p/followers",
"following_url": "https://api.github.com/users/thanh-p/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 2 | 2021-04-08 04:21:54+00:00 | 2021-04-08 12:13:19+00:00 | 2021-04-08 12:13:19+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of... | {
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"events_url": "https://api.github.com/users/thanh-p/events{/privacy}",
"followers_url": "https://api.github.com/users/thanh-p/followers",
"following_url": "https://api.github.com/users/thanh-p/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | completed | null | null | false |
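For symptoms like the one in issue 2188 above (every transcript in a split looking identical), a quick duplicate report over the text column makes the problem measurable. A small stdlib sketch — the helper name and return shape are illustrative:

```python
from collections import Counter

# Quick diagnostic for issue 2188's symptom: how many distinct texts
# does a split contain, and which value dominates?
def duplicate_report(texts):
    counts = Counter(texts)
    return {
        "total": len(texts),
        "unique": len(counts),
        "most_common": counts.most_common(1)[0] if texts else None,
    }
```

A healthy split should report `unique` close to `total`; the issue's output would show `unique == 1`.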
https://api.github.com/repos/huggingface/datasets/issues/2187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2187/comments | https://api.github.com/repos/huggingface/datasets/issues/2187/events | https://github.com/huggingface/datasets/issues/2187 | 852,939,736 | MDU6SXNzdWU4NTI5Mzk3MzY= | 2,187 | Question (potential issue?) related to datasets caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url"... | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | [] | null | 15 | 2021-04-08 00:16:28+00:00 | 2023-01-03 18:30:38+00:00 | NaT | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2187/timeline | null | null | null | null | false |
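Issue 2187 above calls `set_caching_enabled(False)` yet still sees cache-related log lines. A minimal sketch of a process-wide caching toggle in the spirit of that API — illustrative, not the library's actual implementation, and it does not reproduce the subtlety the issue reports (dataset *loading* may still write cache files even when transform caching is off):

```python
# Minimal process-wide caching toggle in the spirit of
# datasets.set_caching_enabled / is_caching_enabled.
_CACHING_ENABLED = True

def set_caching_enabled(enabled):
    global _CACHING_ENABLED
    _CACHING_ENABLED = bool(enabled)

def is_caching_enabled():
    return _CACHING_ENABLED
```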
https://api.github.com/repos/huggingface/datasets/issues/2186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2186/comments | https://api.github.com/repos/huggingface/datasets/issues/2186/events | https://github.com/huggingface/datasets/pull/2186 | 852,840,819 | MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0 | 2,186 | GEM: new challenge sets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 1 | 2021-04-07 21:39:07+00:00 | 2021-04-07 21:56:35+00:00 | 2021-04-07 21:56:35+00:00 | MEMBER | null | null | null | null | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2186/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2186.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2186",
"merged_at": "2021-04-07T21:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2186.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2185/comments | https://api.github.com/repos/huggingface/datasets/issues/2185/events | https://github.com/huggingface/datasets/issues/2185 | 852,684,395 | MDU6SXNzdWU4NTI2ODQzOTU= | 2,185 | .map() and distributed training | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 8 | 2021-04-07 18:22:14+00:00 | 2021-10-23 07:11:15+00:00 | 2021-04-09 15:38:31+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`datasets` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2185/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2184/comments | https://api.github.com/repos/huggingface/datasets/issues/2184/events | https://github.com/huggingface/datasets/pull/2184 | 852,597,258 | MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0 | 2,184 | Implementation of class_encode_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 1 | 2021-04-07 16:47:43+00:00 | 2021-04-16 11:44:37+00:00 | 2021-04-16 11:26:59+00:00 | CONTRIBUTOR | null | null | null | null | Addresses #2176
I'm happy to discuss the API and internals! | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2184/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2184/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2184",
"merged_at": "2021-04-16T11:26:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2183/comments | https://api.github.com/repos/huggingface/datasets/issues/2183/events | https://github.com/huggingface/datasets/pull/2183 | 852,518,411 | MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz | 2,183 | Fix s3fs tests for py36 and py37+ | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-04-07 15:17:11+00:00 | 2021-04-08 08:54:45+00:00 | 2021-04-08 08:54:44+00:00 | MEMBER | null | null | null | null | Recently several changes happened:
1. latest versions of `fsspec` require python>3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in ser... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2183/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2183",
"merged_at": "2021-04-08T08:54:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2182/comments | https://api.github.com/repos/huggingface/datasets/issues/2182/events | https://github.com/huggingface/datasets/pull/2182 | 852,384,872 | MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy | 2,182 | Set default in-memory value depending on the dataset size | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | 4 | 2021-04-07 13:00:18+00:00 | 2021-04-20 14:20:12+00:00 | 2021-04-20 10:04:04+00:00 | MEMBER | null | null | null | null | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Closes #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be c... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2182/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2182.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2182",
"merged_at": "2021-04-20T10:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2182.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2181/comments | https://api.github.com/repos/huggingface/datasets/issues/2181/events | https://github.com/huggingface/datasets/issues/2181 | 852,261,607 | MDU6SXNzdWU4NTIyNjE2MDc= | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 9 | 2021-04-07 10:26:46+00:00 | 2021-04-12 07:15:55+00:00 | 2021-04-12 07:15:55+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https:... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2181/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2180/comments | https://api.github.com/repos/huggingface/datasets/issues/2180/events | https://github.com/huggingface/datasets/pull/2180 | 852,258,635 | MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2 | 2,180 | Add tel to xtreme tatoeba | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-04-07 10:23:15+00:00 | 2021-04-07 15:50:35+00:00 | 2021-04-07 15:50:34+00:00 | MEMBER | null | null | null | null | This should fix issue #2149 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2180/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"merged_at": "2021-04-07T15:50:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2179/comments | https://api.github.com/repos/huggingface/datasets/issues/2179/events | https://github.com/huggingface/datasets/issues/2179 | 852,237,957 | MDU6SXNzdWU4NTIyMzc5NTc= | 2,179 | Load small datasets in-memory instead of using memory map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | 0 | 2021-04-07 09:58:16+00:00 | 2021-04-20 10:04:04+00:00 | 2021-04-20 10:04:03+00:00 | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However, this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the cach... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2179/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2178/comments | https://api.github.com/repos/huggingface/datasets/issues/2178/events | https://github.com/huggingface/datasets/pull/2178 | 852,215,058 | MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1 | 2,178 | Fix cast memory usage by using map on subtables | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | 3 | 2021-04-07 09:30:50+00:00 | 2021-04-20 14:20:44+00:00 | 2021-04-13 09:28:16+00:00 | MEMBER | null | null | null | null | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2178/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"merged_at": "2021-04-13T09:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2177/comments | https://api.github.com/repos/huggingface/datasets/issues/2177/events | https://github.com/huggingface/datasets/pull/2177 | 852,065,307 | MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx | 2,177 | add social thumbnail | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-04-07 06:40:06+00:00 | 2021-04-07 08:16:01+00:00 | 2021-04-07 08:16:01+00:00 | CONTRIBUTOR | null | null | null | null | # What does this PR do?
I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails.

To be able to add these I needed to install `sphinxext-op... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2177/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2177/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2177",
"merged_at": "2021-04-07T08:16:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2176/comments | https://api.github.com/repos/huggingface/datasets/issues/2176/events | https://github.com/huggingface/datasets/issues/2176 | 851,865,795 | MDU6SXNzdWU4NTE4NjU3OTU= | 2,176 | Converting a Value to a ClassLabel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url":... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 2 | 2021-04-06 22:54:16+00:00 | 2022-06-01 16:31:49+00:00 | 2022-06-01 16:31:49+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2176/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2175/comments | https://api.github.com/repos/huggingface/datasets/issues/2175/events | https://github.com/huggingface/datasets/issues/2175 | 851,836,096 | MDU6SXNzdWU4NTE4MzYwOTY= | 2,175 | dataset.search_batch() function outputs all -1 indices sometime. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 6 | 2021-04-06 21:50:49+00:00 | 2021-04-16 12:21:16+00:00 | 2021-04-16 12:21:15+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2175/timeline | null | completed | null | null | false |
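For the all-`-1` results reported above: `-1` is the placeholder faiss returns when the index cannot produce a neighbor, and for IVF indexes this often means too few cells were probed (raising `index.nprobe` is the usual first remedy — stated here as an assumption, not a confirmed fix for this issue). Whatever the cause, downstream code should treat `-1` as "no hit" before using the ids to index into the dataset; a small numpy sketch of that filtering:

```python
import numpy as np

# Simulated output of dataset.search_batch(): -1 marks "no neighbor found".
indices = np.array([[12, 7, -1], [-1, -1, -1]])

def valid_neighbors(row):
    # Drop the -1 placeholders before using the ids as dataset row indices.
    return [int(i) for i in row if i >= 0]

per_query = [valid_neighbors(row) for row in indices]
assert per_query == [[12, 7], []]

# A query whose every slot is -1 is the symptom described in this issue:
all_missing = [len(hits) == 0 for hits in per_query]
assert all_missing == [False, True]
```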
https://api.github.com/repos/huggingface/datasets/issues/2174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2174/comments | https://api.github.com/repos/huggingface/datasets/issues/2174/events | https://github.com/huggingface/datasets/pull/2174 | 851,383,675 | MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2 | 2,174 | Pin docutils for better doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-04-06 12:40:20+00:00 | 2021-04-06 12:55:53+00:00 | 2021-04-06 12:55:53+00:00 | CONTRIBUTOR | null | null | null | null | The latest release of docutils make the navbar in the documentation weird and the Markdown wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx... | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2174/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2174.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2174",
"merged_at": "2021-04-06T12:55:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2174.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2173/comments | https://api.github.com/repos/huggingface/datasets/issues/2173/events | https://github.com/huggingface/datasets/pull/2173 | 851,359,284 | MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2 | 2,173 | Add OpenSLR dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gi... | [] | closed | false | null | [] | null | 0 | 2021-04-06 12:08:34+00:00 | 2021-04-12 16:54:46+00:00 | 2021-04-12 16:54:46+00:00 | CONTRIBUTOR | null | null | null | null | OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed in OpenSLR, currently this PR includes only 9 speech datasets SLR41, SLR42, SLR43, SLR44, SLR... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2173/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2173/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2173",
"merged_at": "2021-04-12T16:54:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2172/comments | https://api.github.com/repos/huggingface/datasets/issues/2172/events | https://github.com/huggingface/datasets/pull/2172 | 851,229,399 | MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx | 2,172 | Pin fsspec lower than 0.9.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-04-06 09:19:09+00:00 | 2021-04-06 09:49:27+00:00 | 2021-04-06 09:49:26+00:00 | MEMBER | null | null | null | null | Today's release of `fsspec` 0.9.0 implied a new release of `s3fs` 0.6.0 but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example)
I'm pinning `fsspec` until this has been resolved | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2172/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2172.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2172",
"merged_at": "2021-04-06T09:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2172.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2171/comments | https://api.github.com/repos/huggingface/datasets/issues/2171/events | https://github.com/huggingface/datasets/pull/2171 | 851,090,662 | MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw | 2,171 | Fixed the link to wikiauto training data. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 3 | 2021-04-06 07:13:11+00:00 | 2021-04-06 16:05:42+00:00 | 2021-04-06 16:05:09+00:00 | CONTRIBUTOR | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2171/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2171",
"merged_at": "2021-04-06T16:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | |
https://api.github.com/repos/huggingface/datasets/issues/2170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2170/comments | https://api.github.com/repos/huggingface/datasets/issues/2170/events | https://github.com/huggingface/datasets/issues/2170 | 850,913,228 | MDU6SXNzdWU4NTA5MTMyMjg= | 2,170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | {
"avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4",
"events_url": "https://api.github.com/users/leezu/events{/privacy}",
"followers_url": "https://api.github.com/users/leezu/followers",
"following_url": "https://api.github.com/users/leezu/following{/other_user}",
"gists_url": "https://api.gi... | [] | open | false | null | [] | null | 1 | 2021-04-06 03:13:18+00:00 | 2021-06-16 01:10:50+00:00 | NaT | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2170/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2169/comments | https://api.github.com/repos/huggingface/datasets/issues/2169/events | https://github.com/huggingface/datasets/pull/2169 | 850,456,180 | MDExOlB1bGxSZXF1ZXN0NjA5MDI2ODUz | 2,169 | Updated WER metric implementation to avoid memory issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4",
"events_url": "https://api.github.com/users/diego-fustes/events{/privacy}",
"followers_url": "https://api.github.com/users/diego-fustes/followers",
"following_url": "https://api.github.com/users/diego-fustes/following{/other_user}",
"gists... | [] | closed | false | null | [] | null | 1 | 2021-04-05 15:43:20+00:00 | 2021-04-06 15:02:58+00:00 | 2021-04-06 15:02:58+00:00 | NONE | null | null | null | null | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2169/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2169"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2168/comments | https://api.github.com/repos/huggingface/datasets/issues/2168/events | https://github.com/huggingface/datasets/pull/2168 | 849,957,941 | MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5 | 2,168 | Preserve split type when reloading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 5 | 2021-04-04 20:46:21+00:00 | 2021-04-19 10:57:05+00:00 | 2021-04-19 09:08:55+00:00 | COLLABORATOR | null | null | null | null | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arr... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2168/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2168",
"merged_at": "2021-04-19T09:08:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2167/comments | https://api.github.com/repos/huggingface/datasets/issues/2167/events | https://github.com/huggingface/datasets/issues/2167 | 849,944,891 | MDU6SXNzdWU4NDk5NDQ4OTE= | 2,167 | Split type not preserved when reloading the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-04-04 19:29:54+00:00 | 2021-04-19 09:08:55+00:00 | 2021-04-19 09:08:55+00:00 | COLLABORATOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<cla... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2167/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2166/comments | https://api.github.com/repos/huggingface/datasets/issues/2166/events | https://github.com/huggingface/datasets/issues/2166 | 849,778,545 | MDU6SXNzdWU4NDk3Nzg1NDU= | 2,166 | Regarding Test Sets for the GEM datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | [] | null | 2 | 2021-04-04 02:02:45+00:00 | 2021-04-06 08:13:12+00:00 | 2021-04-06 08:13:12+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test... | {
"avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4",
"events_url": "https://api.github.com/users/vyraun/events{/privacy}",
"followers_url": "https://api.github.com/users/vyraun/followers",
"following_url": "https://api.github.com/users/vyraun/following{/other_user}",
"gists_url": "https://a... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2166/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2165/comments | https://api.github.com/repos/huggingface/datasets/issues/2165/events | https://github.com/huggingface/datasets/issues/2165 | 849,771,665 | MDU6SXNzdWU4NDk3NzE2NjU= | 2,165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"events_url": "https://api.github.com/users/y-rokutan/events{/privacy}",
"followers_url": "https://api.github.com/users/y-rokutan/followers",
"following_url": "https://api.github.com/users/y-rokutan/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 7 | 2021-04-04 01:01:48+00:00 | 2021-08-24 15:55:35+00:00 | 2021-04-07 15:06:04+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4",
"events_url": "https://api.github.com/users/y-rokutan/events{/privacy}",
"followers_url": "https://api.github.com/users/y-rokutan/followers",
"following_url": "https://api.github.com/users/y-rokutan/following{/other_user}",
"gists_url": "... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2165/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2164/comments | https://api.github.com/repos/huggingface/datasets/issues/2164/events | https://github.com/huggingface/datasets/pull/2164 | 849,739,759 | MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-04-03 21:07:02+00:00 | 2021-04-06 14:41:09+00:00 | 2021-04-06 14:41:08+00:00 | COLLABORATOR | null | null | null | null | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2164/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"merged_at": "2021-04-06T14:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2163/comments | https://api.github.com/repos/huggingface/datasets/issues/2163/events | https://github.com/huggingface/datasets/pull/2163 | 849,669,366 | MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz | 2,163 | Concat only unique fields in DatasetInfo.from_merge | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 3 | 2021-04-03 14:31:30+00:00 | 2021-04-06 14:40:00+00:00 | 2021-04-06 14:39:59+00:00 | COLLABORATOR | null | null | null | null | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2163/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2163.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2163",
"merged_at": "2021-04-06T14:39:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2163.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2162/comments | https://api.github.com/repos/huggingface/datasets/issues/2162/events | https://github.com/huggingface/datasets/issues/2162 | 849,129,201 | MDU6SXNzdWU4NDkxMjkyMDE= | 2,162 | visualization for cc100 is broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 3 | 2021-04-02 10:11:13+00:00 | 2022-10-05 13:20:24+00:00 | 2022-10-05 13:20:24+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2162/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 6 | 2021-04-02 10:06:46+00:00 | 2022-10-05 13:26:51+00:00 | 2022-10-05 13:26:51+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2160/comments | https://api.github.com/repos/huggingface/datasets/issues/2160/events | https://github.com/huggingface/datasets/issues/2160 | 849,052,921 | MDU6SXNzdWU4NDkwNTI5MjE= | 2,160 | data_args.preprocessing_num_workers almost freezes | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 2 | 2021-04-02 07:56:13+00:00 | 2021-04-02 10:14:32+00:00 | 2021-04-02 10:14:31+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
to speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with opus100 corpus but this moves ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2160/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | 1 | 2021-04-01 23:28:36+00:00 | 2021-04-02 10:05:19+00:00 | 2021-04-02 10:05:19+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite importan... | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2158/comments | https://api.github.com/repos/huggingface/datasets/issues/2158/events | https://github.com/huggingface/datasets/issues/2158 | 848,506,746 | MDU6SXNzdWU4NDg1MDY3NDY= | 2,158 | viewer "fake_news_english" error | {
"avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4",
"events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}",
"followers_url": "https://api.github.com/users/emanuelevivoli/followers",
"following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}",
... | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | 2021-04-01 14:13:20+00:00 | 2022-10-05 13:22:02+00:00 | 2022-10-05 13:22:02+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2158/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2157/comments | https://api.github.com/repos/huggingface/datasets/issues/2157/events | https://github.com/huggingface/datasets/pull/2157 | 847,205,239 | MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx | 2,157 | updated user permissions based on umask | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
... | [] | closed | false | null | [] | null | 0 | 2021-03-31 19:38:29+00:00 | 2021-04-06 07:19:19+00:00 | 2021-04-06 07:19:19+00:00 | CONTRIBUTOR | null | null | null | null | Updated user permissions based on running user's umask (#2065). Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2157/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2157",
"merged_at": "2021-04-06T07:19:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2156/comments | https://api.github.com/repos/huggingface/datasets/issues/2156/events | https://github.com/huggingface/datasets/pull/2156 | 847,198,295 | MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky | 2,156 | User permissions | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
... | [] | closed | false | null | [] | null | 0 | 2021-03-31 19:33:48+00:00 | 2021-03-31 19:34:24+00:00 | 2021-03-31 19:34:24+00:00 | CONTRIBUTOR | null | null | null | null | Updated user permissions based on running user's umask. Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2156/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2156.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2156",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2156.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2156"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2155/comments | https://api.github.com/repos/huggingface/datasets/issues/2155/events | https://github.com/huggingface/datasets/pull/2155 | 846,786,897 | MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4 | 2,155 | Add table classes to the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 1 | 2021-03-31 14:36:10+00:00 | 2021-04-01 16:46:30+00:00 | 2021-03-31 15:42:08+00:00 | MEMBER | null | null | null | null | Following #2025 , I added the table classes to the documentation
cc @albertvillanova | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2155/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2155",
"merged_at": "2021-03-31T15:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2154/comments | https://api.github.com/repos/huggingface/datasets/issues/2154/events | https://github.com/huggingface/datasets/pull/2154 | 846,763,960 | MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | {
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api... | [] | closed | false | null | [] | null | 1 | 2021-03-31 14:22:50+00:00 | 2021-04-01 09:27:00+00:00 | 2021-04-01 09:16:08+00:00 | CONTRIBUTOR | null | null | null | null | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2154/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2154",
"merged_at": "2021-04-01T09:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2153/comments | https://api.github.com/repos/huggingface/datasets/issues/2153/events | https://github.com/huggingface/datasets/issues/2153 | 846,181,502 | MDU6SXNzdWU4NDYxODE1MDI= | 2,153 | load_dataset ignoring features | {
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 3 | 2021-03-31 08:30:09+00:00 | 2022-10-05 13:29:12+00:00 | 2022-10-05 13:29:12+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
First of all, I'm sorry if this is a duplicate issue or if the changes are already in master; I searched and didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the C... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2153/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2152/comments | https://api.github.com/repos/huggingface/datasets/issues/2152/events | https://github.com/huggingface/datasets/pull/2152 | 845,751,273 | MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz | 2,152 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 0 | 2021-03-31 03:21:19+00:00 | 2021-04-01 10:20:37+00:00 | 2021-04-01 10:20:36+00:00 | CONTRIBUTOR | null | null | null | null | Updated some descriptions of Wino_Bias dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2152/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2152.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2152",
"merged_at": "2021-04-01T10:20:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2152.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2151/comments | https://api.github.com/repos/huggingface/datasets/issues/2151/events | https://github.com/huggingface/datasets/pull/2151 | 844,886,081 | MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw | 2,151 | Add support for axis in concatenate datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | 5 | 2021-03-30 16:58:44+00:00 | 2021-06-23 17:41:02+00:00 | 2021-04-19 16:07:18+00:00 | MEMBER | null | null | null | null | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2151/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2151",
"merged_at": "2021-04-19T16:07:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2150/comments | https://api.github.com/repos/huggingface/datasets/issues/2150/events | https://github.com/huggingface/datasets/pull/2150 | 844,776,448 | MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx | 2,150 | Allow pickling of big in-memory tables | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-03-30 15:51:56+00:00 | 2021-03-31 10:37:15+00:00 | 2021-03-31 10:37:14+00:00 | MEMBER | null | null | null | null | This should fix issue #2134
Pickling is limited to objects smaller than 4 GiB, so it's not possible to pickle a big Arrow table (for multiprocessing, for example).
For big tables, we have to write them to disk and pickle only the path to the table. | {
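The workaround described in this PR body (write big tables to disk and pickle only the path) can be sketched with stdlib tools alone. `FileBackedTable` below is a hypothetical toy class for illustration, not the actual `datasets` implementation, which uses Arrow files:

```python
import os
import pickle
import tempfile

class FileBackedTable:
    """Toy illustration: pickling this object serializes only a small
    path string, never the in-memory payload itself."""

    def __init__(self, rows):
        self.rows = rows
        fd, self.path = tempfile.mkstemp(suffix=".bin")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(rows, f)  # the payload lives on disk from now on

    def __reduce__(self):
        # Pickle protocol hook: reconstruct from the path, not from `rows`,
        # so the pickle payload stays tiny no matter how big the table is.
        return (FileBackedTable._from_path, (self.path,))

    @staticmethod
    def _from_path(path):
        obj = object.__new__(FileBackedTable)
        obj.path = path
        with open(path, "rb") as f:
            obj.rows = pickle.load(f)
        return obj

table = FileBackedTable(list(range(100_000)))
payload = pickle.dumps(table)   # small: contains only the path
restored = pickle.loads(payload)
print(len(payload), restored.rows == table.rows)
```

The same `__reduce__` pattern is what makes a table usable with `multiprocessing`, since worker processes receive only the path and re-open the file themselves.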
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2150/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2150.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2150",
"merged_at": "2021-03-31T10:37:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2150.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2149/comments | https://api.github.com/repos/huggingface/datasets/issues/2149/events | https://github.com/huggingface/datasets/issues/2149 | 844,734,076 | MDU6SXNzdWU4NDQ3MzQwNzY= | 2,149 | Telugu subset missing for xtreme tatoeba dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"events_url": "https://api.github.com/users/cosmeowpawlitan/events{/privacy}",
"followers_url": "https://api.github.com/users/cosmeowpawlitan/followers",
"following_url": "https://api.github.com/users/cosmeowpawlitan/following{/other_user}"... | [] | closed | false | null | [] | null | 2 | 2021-03-30 15:26:34+00:00 | 2022-10-05 13:28:30+00:00 | 2022-10-05 13:28:30+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
ValueError: BuilderConfig tatoeba.tel not found.
But the language `tel` is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
def tatoeba_preprocess(args):
lang3_dict ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2149/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2148/comments | https://api.github.com/repos/huggingface/datasets/issues/2148/events | https://github.com/huggingface/datasets/issues/2148 | 844,700,910 | MDU6SXNzdWU4NDQ3MDA5MTA= | 2,148 | Add configurable options to `seqeval` metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 1 | 2021-03-30 15:04:06+00:00 | 2021-04-15 13:49:46+00:00 | 2021-04-15 13:49:46+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs... | {
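To make the "evaluation scheme" idea concrete, here is a minimal stdlib sketch of what strict IOB2 entity matching means. This is an illustration of the concept only, not seqeval's actual implementation:

```python
def iob2_entities(tags):
    """Extract (type, start, end) spans under a strict IOB2 reading:
    a span must start with B- and continue with matching I- tags."""
    entities, start, label = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        prefix, _, etype = tag.partition("-")
        if start is not None and (prefix != "I" or etype != label):
            entities.append((label, start, i))
            start, label = None, None
        if prefix == "B":
            start, label = i, etype
    return entities

gold = iob2_entities(["B-PER", "I-PER", "O", "B-LOC"])
pred = iob2_entities(["B-PER", "I-PER", "O", "B-ORG"])
matches = len(set(gold) & set(pred))  # only exact span + type pairs count
print(gold, pred, matches)
```

Under a different scheme (e.g. IOB1), a bare `I-` tag can open a span, which is exactly the kind of behavior the proposed kwargs would let callers select.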
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2148/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2147/comments | https://api.github.com/repos/huggingface/datasets/issues/2147/events | https://github.com/huggingface/datasets/pull/2147 | 844,687,831 | MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4 | 2,147 | Render docstring return type as inline | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | 0 | 2021-03-30 14:55:43+00:00 | 2021-03-31 13:11:05+00:00 | 2021-03-31 13:11:05+00:00 | MEMBER | null | null | null | null | This documentation setting will avoid having the return type in a separate line under `Return type`.
See e.g. current docs for `Dataset.to_csv`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2147/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2147.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2147",
"merged_at": "2021-03-31T13:11:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2147.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2146/comments | https://api.github.com/repos/huggingface/datasets/issues/2146/events | https://github.com/huggingface/datasets/issues/2146 | 844,673,244 | MDU6SXNzdWU4NDQ2NzMyNDQ= | 2,146 | Dataset file size on disk is very large with 3D Array | {
"avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4",
"events_url": "https://api.github.com/users/jblemoine/events{/privacy}",
"followers_url": "https://api.github.com/users/jblemoine/followers",
"following_url": "https://api.github.com/users/jblemoine/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | null | 6 | 2021-03-30 14:46:09+00:00 | 2021-04-16 13:07:02+00:00 | NaT | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2146/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2145/comments | https://api.github.com/repos/huggingface/datasets/issues/2145/events | https://github.com/huggingface/datasets/pull/2145 | 844,603,518 | MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2 | 2,145 | Implement Dataset add_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | 1 | 2021-03-30 14:02:14+00:00 | 2021-04-29 14:50:44+00:00 | 2021-04-29 14:50:43+00:00 | MEMBER | null | null | null | null | Implement `Dataset.add_column`.
Close #1954. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2145/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"merged_at": "2021-04-29T14:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2144/comments | https://api.github.com/repos/huggingface/datasets/issues/2144/events | https://github.com/huggingface/datasets/issues/2144 | 844,352,067 | MDU6SXNzdWU4NDQzNTIwNjc= | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | {
"avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4",
"events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}",
"followers_url": "https://api.github.com/users/TomPyonsuke/followers",
"following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}",
"gists_u... | [] | open | false | null | [] | null | 6 | 2021-03-30 10:38:31+00:00 | 2021-04-01 09:21:17+00:00 | NaT | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | **Problem description**
I am getting the following error when trying to load the wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2144/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2143/comments | https://api.github.com/repos/huggingface/datasets/issues/2143/events | https://github.com/huggingface/datasets/pull/2143 | 844,313,228 | MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0 | 2,143 | task casting via load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://a... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_ur... | null | 0 | 2021-03-30 10:00:42+00:00 | 2021-06-11 13:20:41+00:00 | 2021-06-11 13:20:36+00:00 | CONTRIBUTOR | null | null | null | null | wip
not satisfied with the API, it means as a dataset implementer I need to write a function with boilerplate and write classes for each `<dataset><task>` "facet". | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2143/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2143.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2143",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2143.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2143"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2142/comments | https://api.github.com/repos/huggingface/datasets/issues/2142/events | https://github.com/huggingface/datasets/pull/2142 | 843,919,420 | MDExOlB1bGxSZXF1ZXN0NjAzMjQwMzUy | 2,142 | Gem V1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 0 | 2021-03-29 23:47:02+00:00 | 2021-03-30 00:10:02+00:00 | 2021-03-30 00:10:02+00:00 | MEMBER | null | null | null | null | This branch updates the GEM benchmark to its 1.1 version which includes:
- challenge sets for most tasks
- detokenized TurkCorpus to match the rest of the text simplification subtasks
- fixed inputs for TurkCorpus and ASSET test sets
- 18 languages in WikiLingua
cc @sebastianGehrmann | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2142/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2142.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2142",
"merged_at": "2021-03-30T00:10:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2142.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2141/comments | https://api.github.com/repos/huggingface/datasets/issues/2141/events | https://github.com/huggingface/datasets/pull/2141 | 843,914,790 | MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw | 2,141 | added spans field for the wikiann datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | 3 | 2021-03-29 23:38:26+00:00 | 2021-03-31 13:27:50+00:00 | 2021-03-31 13:27:50+00:00 | CONTRIBUTOR | null | null | null | null | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2141/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2141.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2141",
"merged_at": "2021-03-31T13:27:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2141.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2140/comments | https://api.github.com/repos/huggingface/datasets/issues/2140/events | https://github.com/huggingface/datasets/pull/2140 | 843,830,451 | MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx | 2,140 | add banking77 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 1 | 2021-03-29 21:32:23+00:00 | 2021-04-09 09:32:18+00:00 | 2021-04-09 09:32:18+00:00 | CONTRIBUTOR | null | null | null | null | Intent classification/detection dataset from banking category with 77 unique intents. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2140/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2140",
"merged_at": "2021-04-09T09:32:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2139/comments | https://api.github.com/repos/huggingface/datasets/issues/2139/events | https://github.com/huggingface/datasets/issues/2139 | 843,662,613 | MDU6SXNzdWU4NDM2NjI2MTM= | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | {
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 2 | 2021-03-29 18:23:54+00:00 | 2021-03-30 09:12:53+00:00 | 2021-03-30 09:12:53+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | {
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2139/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2138/comments | https://api.github.com/repos/huggingface/datasets/issues/2138/events | https://github.com/huggingface/datasets/pull/2138 | 843,508,402 | MDExOlB1bGxSZXF1ZXN0NjAyODc4NzU2 | 2,138 | Add CER metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/6931004?v=4",
"events_url": "https://api.github.com/users/chutaklee/events{/privacy}",
"followers_url": "https://api.github.com/users/chutaklee/followers",
"following_url": "https://api.github.com/users/chutaklee/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | 0 | 2021-03-29 15:52:27+00:00 | 2021-04-06 16:16:11+00:00 | 2021-04-06 07:14:38+00:00 | CONTRIBUTOR | null | null | null | null | Add a Character Error Rate (CER) metric that is used in ASR evaluation. I have also written unit tests (hopefully thorough enough) but I'm not sure how to integrate them into the existing codebase.
```python
from cer import CER
cer = CER()
class TestCER(unittest.TestCase):
def test_cer_case_senstive(self)... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2138/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2138.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2138",
"merged_at": "2021-04-06T07:14:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2138.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2137/comments | https://api.github.com/repos/huggingface/datasets/issues/2137/events | https://github.com/huggingface/datasets/pull/2137 | 843,502,835 | MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw | 2,137 | Fix missing infos from concurrent dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-03-29 15:46:12+00:00 | 2021-03-31 10:35:56+00:00 | 2021-03-31 10:35:55+00:00 | MEMBER | null | null | null | null | This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the workers could have missing split infos when reloading the dataset from the cache.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2137/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2137",
"merged_at": "2021-03-31T10:35:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2136/comments | https://api.github.com/repos/huggingface/datasets/issues/2136/events | https://github.com/huggingface/datasets/pull/2136 | 843,492,015 | MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5 | 2,136 | fix dialogue action slot name and value | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-03-29 15:34:13+00:00 | 2021-03-31 12:48:02+00:00 | 2021-03-31 12:48:01+00:00 | CONTRIBUTOR | null | null | null | null | fix #2128 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2136/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2136",
"merged_at": "2021-03-31T12:48:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | 3 | 2021-03-29 10:47:50+00:00 | 2021-03-30 10:20:23+00:00 | 2021-03-30 10:20:23+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq, thank you for your help fixing this issue. | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https:/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2134/comments | https://api.github.com/repos/huggingface/datasets/issues/2134/events | https://github.com/huggingface/datasets/issues/2134 | 843,242,849 | MDU6SXNzdWU4NDMyNDI4NDk= | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | {
"avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4",
"events_url": "https://api.github.com/users/prokopCerny/events{/privacy}",
"followers_url": "https://api.github.com/users/prokopCerny/followers",
"following_url": "https://api.github.com/users/prokopCerny/following{/other_user}",
"gists_ur... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 6 | 2021-03-29 10:43:15+00:00 | 2021-05-03 17:59:21+00:00 | 2021-05-03 17:59:21+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2134/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 3 | 2021-03-29 09:03:09+00:00 | 2021-03-30 17:40:57+00:00 | 2021-03-30 17:40:57+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2132/comments | https://api.github.com/repos/huggingface/datasets/issues/2132/events | https://github.com/huggingface/datasets/issues/2132 | 843,142,822 | MDU6SXNzdWU4NDMxNDI4MjI= | 2,132 | TydiQA dataset is mixed and is not split per language | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | 3 | 2021-03-29 08:56:21+00:00 | 2021-04-04 09:57:15+00:00 | NaT | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi @lhoestq
Currently TydiQA is mixed and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each separate language, and having them mixed makes it hard to use this dataset. This is much convenien... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2132/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2131/comments | https://api.github.com/repos/huggingface/datasets/issues/2131/events | https://github.com/huggingface/datasets/issues/2131 | 843,133,112 | MDU6SXNzdWU4NDMxMzMxMTI= | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | {
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url"... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 3 | 2021-03-29 08:45:58+00:00 | 2021-04-10 11:08:55+00:00 | 2021-04-10 11:08:55+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
Sometimes I will get this error: `TypeError: 'NoneType' object is not iterable`
This is the traceback:
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py"... | {
"avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4",
"events_url": "https://api.github.com/users/andy-yangz/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-yangz/followers",
"following_url": "https://api.github.com/users/andy-yangz/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2131/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2130/comments | https://api.github.com/repos/huggingface/datasets/issues/2130/events | https://github.com/huggingface/datasets/issues/2130 | 843,111,936 | MDU6SXNzdWU4NDMxMTE5MzY= | 2,130 | wikiann dataset is missing columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | 5 | 2021-03-29 08:23:00+00:00 | 2021-08-27 14:44:18+00:00 | 2021-08-27 14:44:18+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
The Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2130/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2129/comments | https://api.github.com/repos/huggingface/datasets/issues/2129/events | https://github.com/huggingface/datasets/issues/2129 | 843,033,656 | MDU6SXNzdWU4NDMwMzM2NTY= | 2,129 | How to train BERT model with next sentence prediction? | {
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api... | [] | closed | false | null | [] | null | 4 | 2021-03-29 06:48:03+00:00 | 2021-04-01 04:58:40+00:00 | 2021-04-01 04:58:40+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
| {
"avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4",
"events_url": "https://api.github.com/users/jnishi/events{/privacy}",
"followers_url": "https://api.github.com/users/jnishi/followers",
"following_url": "https://api.github.com/users/jnishi/following{/other_user}",
"gists_url": "https://api... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2129/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2128/comments | https://api.github.com/repos/huggingface/datasets/issues/2128/events | https://github.com/huggingface/datasets/issues/2128 | 843,023,910 | MDU6SXNzdWU4NDMwMjM5MTA= | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url"... | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | 1 | 2021-03-29 06:34:02+00:00 | 2021-03-31 12:48:01+00:00 | 2021-03-31 12:48:01+00:00 | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi @yjernite, thank you for adding MultiWoZ 2.2 to the huggingface datasets platform. It is beneficial!
I spotted an error: the order of the dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2128/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2127/comments | https://api.github.com/repos/huggingface/datasets/issues/2127/events | https://github.com/huggingface/datasets/pull/2127 | 843,017,199 | MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3 | 2,127 | make documentation more clear to use different cloud storage | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-03-29 06:24:06+00:00 | 2021-03-29 12:16:24+00:00 | 2021-03-29 12:16:24+00:00 | CONTRIBUTOR | null | null | null | null | This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2127/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2127",
"merged_at": "2021-03-29T12:16:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
https://api.github.com/repos/huggingface/datasets/issues/2126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2126/comments | https://api.github.com/repos/huggingface/datasets/issues/2126/events | https://github.com/huggingface/datasets/pull/2126 | 842,779,966 | MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 0 | 2021-03-28 16:57:30+00:00 | 2021-03-29 09:27:14+00:00 | 2021-03-29 09:27:13+00:00 | COLLABORATOR | null | null | null | null | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2126/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2126",
"merged_at": "2021-03-29T09:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
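The one-line change in PR 2126 above is easy to illustrate. A minimal sketch of why `torch.tensor` is preferred over the legacy `torch.Tensor` constructor (assuming PyTorch is installed; see the linked pytorch/pytorch#53146 for the motivation):

```python
import torch

# torch.tensor() infers the dtype from its data; the legacy
# torch.Tensor() constructor always produces float32.
a = torch.tensor([1, 2, 3])
b = torch.Tensor([1, 2, 3])
assert a.dtype == torch.int64
assert b.dtype == torch.float32

# More surprising: given a bare integer, the legacy constructor allocates
# an UNINITIALIZED tensor of that size instead of wrapping the value.
assert tuple(torch.Tensor(5).shape) == (5,)  # five uninitialized floats
assert torch.tensor(5).item() == 5           # a 0-dim tensor holding 5
```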
https://api.github.com/repos/huggingface/datasets/issues/2125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2125/comments | https://api.github.com/repos/huggingface/datasets/issues/2125/events | https://github.com/huggingface/datasets/issues/2125 | 842,690,570 | MDU6SXNzdWU4NDI2OTA1NzA= | 2,125 | Is dataset timit_asr broken? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}"... | [] | closed | false | null | [] | null | 2 | 2021-03-28 08:30:18+00:00 | 2021-03-28 12:29:25+00:00 | 2021-03-28 12:29:25+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_example... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4",
"events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}",
"followers_url": "https://api.github.com/users/kosuke-kitahara/followers",
"following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2125/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2124/comments | https://api.github.com/repos/huggingface/datasets/issues/2124/events | https://github.com/huggingface/datasets/issues/2124 | 842,627,729 | MDU6SXNzdWU4NDI2Mjc3Mjk= | 2,124 | Adding ScaNN library to do MIPS? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | 1 | 2021-03-28 00:07:00+00:00 | 2021-03-29 13:23:43+00:00 | NaT | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | @lhoestq Hi, I am thinking of adding this new Google library to do MIPS, similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.

https://github.com/google-research/google-research/tree/master/scann

d... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2123/timeline | null | completed | null | null | false |
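As a baseline for what an index like ScaNN (or `add_faiss_index`) accelerates, exact maximum inner product search can be written as a brute-force scan. This is only an illustrative sketch of the retrieval operation being discussed, not a proposed API; `mips_topk` is a hypothetical helper name:

```python
def inner(u, v):
    """Plain inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def mips_topk(query, vectors, k=1):
    """Exact MIPS: score every candidate vector, keep the top-k indices."""
    ranked = sorted(range(len(vectors)),
                    key=lambda i: inner(query, vectors[i]),
                    reverse=True)
    return ranked[:k]

docs = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
print(mips_topk([1.0, 1.0], docs, k=2))  # → [2, 0]
```

Libraries like ScaNN and FAISS approximate this scan with quantization and partitioning so it scales to millions of vectors.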
https://api.github.com/repos/huggingface/datasets/issues/2122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2122/comments | https://api.github.com/repos/huggingface/datasets/issues/2122/events | https://github.com/huggingface/datasets/pull/2122 | 842,194,588 | MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0 | 2,122 | Fast table queries with interpolation search | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2021-03-26 18:09:20+00:00 | 2021-08-04 18:11:59+00:00 | 2021-04-06 14:33:01+00:00 | MEMBER | null | null | null | null | ## Intro
This should fix issue #1803
Currently, querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search, which is quite effective since datasets usually satisfy the condition of evenly distributed chunks (the default ch... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2122/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"merged_at": "2021-04-06T14:33:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
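The idea behind the PR above can be sketched independently of PyArrow. Given the cumulative row offsets of a table's chunks, interpolation search guesses the containing chunk proportionally to the target index, converging in roughly O(log log n) steps when chunks are evenly sized. The names below are illustrative, not the actual implementation:

```python
def find_chunk(offsets, target):
    """Return j such that offsets[j] <= target < offsets[j + 1].

    `offsets` is a sorted list of cumulative chunk lengths, starting at 0.
    """
    lo, hi = 0, len(offsets) - 2
    while lo <= hi:
        span = offsets[hi + 1] - offsets[lo]
        # Interpolate: guess proportionally to where `target` sits in the span.
        j = lo + (target - offsets[lo]) * (hi - lo + 1) // span if span else lo
        j = min(max(j, lo), hi)  # keep the guess inside the current bracket
        if target < offsets[j]:
            hi = j - 1
        elif target >= offsets[j + 1]:
            lo = j + 1
        else:
            return j
    raise IndexError(f"{target} is out of range")

offsets = [0, 100, 200, 300, 1000]   # four chunks, the last one uneven
print(find_chunk(offsets, 150))      # → 1
```

With perfectly even chunks the first guess is already correct; uneven chunks only add a few extra bracketing iterations.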
https://api.github.com/repos/huggingface/datasets/issues/2121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2121/comments | https://api.github.com/repos/huggingface/datasets/issues/2121/events | https://github.com/huggingface/datasets/pull/2121 | 842,148,633 | MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4 | 2,121 | Add Validation For README | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 7 | 2021-03-26 17:02:17+00:00 | 2021-05-10 13:17:18+00:00 | 2021-05-10 09:41:41+00:00 | CONTRIBUTOR | null | null | null | null | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2121/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"merged_at": "2021-05-10T09:41:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
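The design described in this PR — one `Section` class that section-specific classes can inherit from, with a `to_dict()` view — can be sketched as a small heading parser. This is only a toy sketch of the approach under those assumptions, not the code in the PR:

```python
class Section:
    """One README heading plus its text and nested subsections."""

    def __init__(self, name, level):
        self.name = name
        self.level = level
        self.lines = []
        self.subsections = []

    def to_dict(self):
        return {
            "name": self.name,
            "text": "\n".join(self.lines).strip(),
            "subsections": [s.to_dict() for s in self.subsections],
        }


def parse_readme(lines):
    """Build a Section tree from '#'-style markdown headings."""
    root = Section("root", level=0)
    stack = [root]
    for line in lines:
        if line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            section = Section(line.lstrip("#").strip(), level)
            while stack[-1].level >= level:  # close deeper or sibling sections
                stack.pop()
            stack[-1].subsections.append(section)
            stack.append(section)
        else:
            stack[-1].lines.append(line)
    return root
```

Validation rules (required sections, non-empty text, and so on) can then be expressed as checks over the resulting tree.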
https://api.github.com/repos/huggingface/datasets/issues/2120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2120/comments | https://api.github.com/repos/huggingface/datasets/issues/2120/events | https://github.com/huggingface/datasets/issues/2120 | 841,954,521 | MDU6SXNzdWU4NDE5NTQ1MjE= | 2,120 | dataset viewer does not work anymore | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | 2 | 2021-03-26 13:22:13+00:00 | 2021-03-26 15:52:22+00:00 | 2021-03-26 15:52:22+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? It was very helpful @lhoestq
thanks for your help | {
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.git... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2120/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2119/comments | https://api.github.com/repos/huggingface/datasets/issues/2119/events | https://github.com/huggingface/datasets/pull/2119 | 841,567,199 | MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy | 2,119 | copy.deepcopy os.environ instead of copy | {
"avatar_url": "https://avatars.githubusercontent.com/u/5506053?v=4",
"events_url": "https://api.github.com/users/NihalHarish/events{/privacy}",
"followers_url": "https://api.github.com/users/NihalHarish/followers",
"following_url": "https://api.github.com/users/NihalHarish/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | 0 | 2021-03-26 03:58:38+00:00 | 2021-03-26 15:13:52+00:00 | 2021-03-26 15:13:52+00:00 | CONTRIBUTOR | null | null | null | null | Fixes: https://github.com/huggingface/datasets/issues/2115
- bug fix: using environ.copy() returns a dict.
- using deepcopy(environ) returns an `_Environ` object
- Changing the datatype of the _Environ object can break code, if subsequent libraries perform operations using APIs exclusive to the environ object, lik... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2119/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2119",
"merged_at": "2021-03-26T15:13:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true |
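The distinction this PR relies on can be checked with the standard library alone: `os.environ.copy()` degrades to a plain `dict` snapshot, while `copy.deepcopy` preserves the `os._Environ` type and therefore its keyword-friendly `get`. A minimal sketch:

```python
import copy
import os

shallow = os.environ.copy()          # a plain dict snapshot
deep = copy.deepcopy(os.environ)     # an independent os._Environ instance

assert type(shallow) is dict
assert isinstance(deep, type(os.environ))

# MutableMapping.get accepts a keyword default; CPython's dict.get does not,
# which is exactly the breakage reported in issue #2115.
assert deep.get("SOME_UNSET_VARIABLE_XYZ", default="fallback") == "fallback"
```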
https://api.github.com/repos/huggingface/datasets/issues/2118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2118/comments | https://api.github.com/repos/huggingface/datasets/issues/2118/events | https://github.com/huggingface/datasets/pull/2118 | 841,563,329 | MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx | 2,118 | Remove os.environ.copy in Dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 1 | 2021-03-26 03:48:17+00:00 | 2021-03-26 12:03:23+00:00 | 2021-03-26 12:00:05+00:00 | COLLABORATOR | null | null | null | null | Replace `os.environ.copy` with in-place modification
Fixes #2115 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2118/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2118",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2118"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2117/comments | https://api.github.com/repos/huggingface/datasets/issues/2117/events | https://github.com/huggingface/datasets/issues/2117 | 841,535,283 | MDU6SXNzdWU4NDE1MzUyODM= | 2,117 | load_metric from local "glue.py" meet error 'NoneType' object is not callable | {
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"g... | [] | closed | false | null | [] | null | 3 | 2021-03-26 02:35:22+00:00 | 2021-08-25 21:44:05+00:00 | 2021-03-26 02:40:26+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
---------------------------------------------------------------------------
TypeError Traceback (most recent... | {
"avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4",
"events_url": "https://api.github.com/users/Frankie123421/events{/privacy}",
"followers_url": "https://api.github.com/users/Frankie123421/followers",
"following_url": "https://api.github.com/users/Frankie123421/following{/other_user}",
"g... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2117/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2116/comments | https://api.github.com/repos/huggingface/datasets/issues/2116/events | https://github.com/huggingface/datasets/issues/2116 | 841,481,292 | MDU6SXNzdWU4NDE0ODEyOTI= | 2,116 | Creating custom dataset results in error while calling the map() function | {
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 1 | 2021-03-26 00:37:46+00:00 | 2021-03-31 14:30:32+00:00 | 2021-03-31 14:30:32+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | Calling `map()` of the `datasets` library results in an error while defining a custom dataset.
Reproducible example:
```
import datasets
class MyDataset(datasets.Dataset):
def __init__(self, sentences):
"Initialization"
self.samples = sentences
def __len__(self):
"Denotes the ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4",
"events_url": "https://api.github.com/users/GeetDsa/events{/privacy}",
"followers_url": "https://api.github.com/users/GeetDsa/followers",
"following_url": "https://api.github.com/users/GeetDsa/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2116/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2115/comments | https://api.github.com/repos/huggingface/datasets/issues/2115/events | https://github.com/huggingface/datasets/issues/2115 | 841,283,974 | MDU6SXNzdWU4NDEyODM5NzQ= | 2,115 | The datasets.map() implementation modifies the datatype of os.environ object | {
"avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4",
"events_url": "https://api.github.com/users/leleamol/events{/privacy}",
"followers_url": "https://api.github.com/users/leleamol/followers",
"following_url": "https://api.github.com/users/leleamol/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 0 | 2021-03-25 20:29:19+00:00 | 2021-03-26 15:13:52+00:00 | 2021-03-26 15:13:52+00:00 | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | In our testing, we noticed that the datasets.map() implementation modifies the datatype of the Python os.environ object from '_Environ' to 'dict'.
This causes subsequent function calls to fail as follows:
`
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2115/timeline | null | completed | null | null | false |
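The `TypeError` reported in issue #2115 above comes from a CPython detail: `dict.get` is a C-level method that rejects keyword arguments, while `os._Environ.get` (inherited from `MutableMapping`) accepts them. A minimal reproduction of the symptom, outside `datasets` entirely:

```python
import os

# On the real environ object a keyword default is fine:
assert os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default="d") == "d"

# But once something replaces os.environ with a plain dict copy, it breaks:
as_dict = os.environ.copy()
try:
    as_dict.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default="d")
except TypeError as err:
    print(err)  # e.g. "get() takes no keyword arguments"
```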