Schema of the records below (one GitHub issue/PR from `huggingface/datasets` per record; stats give the observed min–max or the number of distinct classes):

| Column | dtype | Stats |
|---|---|---|
| `url` | string | lengths 61–61 |
| `repository_url` | string | 1 class |
| `labels_url` | string | lengths 75–75 |
| `comments_url` | string | lengths 70–70 |
| `events_url` | string | lengths 68–68 |
| `html_url` | string | lengths 49–51 |
| `id` | int64 | 758M–1.95B |
| `node_id` | string | lengths 18–32 |
| `number` | int64 | 1.2k–6.31k |
| `title` | string | lengths 1–290 |
| `user` | dict | |
| `labels` | list | lengths 0–3 |
| `state` | string | 2 classes |
| `locked` | bool | 1 class |
| `assignee` | dict | |
| `assignees` | list | lengths 0–4 |
| `milestone` | dict | |
| `comments` | list | lengths 0–30 |
| `created_at` | timestamp[ns, tz=UTC] | |
| `updated_at` | timestamp[ns, tz=UTC] | |
| `closed_at` | timestamp[ns, tz=UTC] | |
| `author_association` | string | 3 classes |
| `active_lock_reason` | float64 | |
| `draft` | float64 | 0–1 |
| `pull_request` | dict | |
| `body` | string | lengths 0–36.2k |
| `reactions` | dict | |
| `timeline_url` | string | lengths 70–70 |
| `performed_via_github_app` | float64 | |
| `state_reason` | string | 3 classes |
| `is_pull_request` | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/3298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3298/comments
https://api.github.com/repos/huggingface/datasets/issues/3298/events
https://github.com/huggingface/datasets/issues/3298
1,058,420,201
I_kwDODunzps4_FjXp
3,298
Agnews dataset viewer is not working
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting\r\nWe've already fixed the code that generates the preview for this dataset, we'll release the fix soon :)", "Hi @lhoestq, thanks for your feedback!", "Fixed in the viewer.\r\n\r\nhttps://huggingface.co/datasets/ag_news" ]
2021-11-19T11:18:59Z
2021-12-21T16:24:05Z
2021-12-21T16:24:05Z
NONE
null
null
null
## Dataset viewer issue for '*name of the dataset*'

**Link:** https://huggingface.co/datasets/ag_news

Hi there, the `ag_news` dataset viewer is not working.

Am I the one who added this dataset? No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3298/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3298/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1362/comments
https://api.github.com/repos/huggingface/datasets/issues/1362/events
https://github.com/huggingface/datasets/pull/1362
760,138,233
MDExOlB1bGxSZXF1ZXN0NTM1MDIwMDAz
1,362
adding opus_infopankki
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[]
closed
false
null
[]
null
[ "Thanks Quentin !" ]
2020-12-09T08:57:10Z
2020-12-09T18:16:20Z
2020-12-09T18:13:48Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1362.diff", "html_url": "https://github.com/huggingface/datasets/pull/1362", "merged_at": "2020-12-09T18:13:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1362.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1362" }
Adding opus_infopankki http://opus.nlpl.eu/infopankki-v1.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1362/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1362/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3095/comments
https://api.github.com/repos/huggingface/datasets/issues/3095/events
https://github.com/huggingface/datasets/issues/3095
1,027,453,146
I_kwDODunzps49PbDa
3,095
`cast_column` makes audio decoding fail
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "cc @anton-l @albertvillanova ", "Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_datas...
2021-10-15T13:36:58Z
2023-04-07T09:43:20Z
2021-10-15T15:38:30Z
MEMBER
null
null
null
## Describe the bug

After changing the sampling rate, automatic decoding fails.

## Steps to reproduce the bug

```python
from datasets import load_dataset
import datasets

ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"])  # <- this fails currently
```

yields:

```
TypeError: forward() takes 2 positional arguments but 4 were given
```

## Expected results

No failure.

## Actual results

Specify the actual results or traceback.

## Environment info

- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3095/timeline
null
completed
false
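For context on the API discussed in the record above: per the maintainer's comment, `cast_column` itself works fine and the failure was specific to mp3 resampling. A minimal, hedged sketch of the `cast_column` pattern on toy, non-audio data (the column name and values here are illustrative, not from Common Voice):

```python
from datasets import Dataset, Value

# Cast a plain integer column to float64; cast_column rewrites the
# feature type and the underlying Arrow data.
ds = Dataset.from_dict({"x": [1, 2, 3]})
ds = ds.cast_column("x", Value("float64"))
print(ds.features["x"])  # Value(dtype='float64', ...)
print(ds[0])             # {'x': 1.0}
```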
https://api.github.com/repos/huggingface/datasets/issues/4389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4389/comments
https://api.github.com/repos/huggingface/datasets/issues/4389/events
https://github.com/huggingface/datasets/pull/4389
1,244,693,690
PR_kwDODunzps44RKMn
4,389
Fix bug in gem dataset for wiki_auto_asset_turk config
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-23T07:19:49Z
2022-05-23T10:38:26Z
2022-05-23T10:29:55Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4389.diff", "html_url": "https://github.com/huggingface/datasets/pull/4389", "merged_at": "2022-05-23T10:29:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/4389.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4389" }
This PR fixes some URLs. Fix #4386.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4389/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2892/comments
https://api.github.com/repos/huggingface/datasets/issues/2892/events
https://github.com/huggingface/datasets/issues/2892
993,274,572
MDU6SXNzdWU5OTMyNzQ1NzI=
2,892
Error when encoding a dataset with None objects with a Sequence feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)" ]
2021-09-10T14:11:43Z
2021-09-13T14:18:13Z
2021-09-13T14:17:42Z
MEMBER
null
null
null
There is an error when encoding a dataset with None objects with a Sequence feature.

To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence

data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
      2 data = {"a": [[0], None]}
      3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
    888         if isinstance(obj, str):  # don't interpret a string as a list
    889             raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890         return [encode_nested_example(schema.feature, o) for o in obj]
    891     # Object with special encoding:
    892     # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks

TypeError: 'NoneType' object is not iterable
```

Instead, it should run without error, as if the `features` were not passed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2892/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2892/timeline
null
completed
false
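A hedged sketch of the behavior after the fix referenced in the comment above (PR #2900, released in datasets 1.12): the same reproduction now accepts None as a null entry of the Sequence column. The expected output is an assumption based on how Arrow represents null lists:

```python
from datasets import Dataset, Features, Sequence, Value

# Identical to the reproduction in the issue; on datasets >= 1.12 this
# no longer raises, and the null row round-trips as None.
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
print(dataset["a"])  # expected: [[0], None]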
https://api.github.com/repos/huggingface/datasets/issues/3876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3876/comments
https://api.github.com/repos/huggingface/datasets/issues/3876/events
https://github.com/huggingface/datasets/pull/3876
1,164,045,075
PR_kwDODunzps40LYC8
3,876
Fix download_mode in dataset_module_factory
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3876). All of your documentation changes will be reflected on that endpoint." ]
2022-03-09T14:54:33Z
2022-03-10T08:47:00Z
2022-03-10T08:46:59Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3876.diff", "html_url": "https://github.com/huggingface/datasets/pull/3876", "merged_at": "2022-03-10T08:46:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/3876.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3876" }
Fix the `download_mode` value set in `dataset_module_factory`. Before the fix, it was set to a `bool` (default `False`). This PR also properly sets its default value in all public functions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3876/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3876/timeline
null
null
true
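For reference, a hedged sketch of how the parameter fixed above is meant to be passed: `download_mode` expects a `DownloadMode` enum value (or its string equivalent), not a bool. The dataset name below is just one that appears elsewhere in these records:

```python
from datasets import load_dataset, DownloadMode

# Force a fresh download instead of reusing the cached copy.
ds = load_dataset("ag_news", download_mode=DownloadMode.FORCE_REDOWNLOAD)
```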
https://api.github.com/repos/huggingface/datasets/issues/1350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1350/comments
https://api.github.com/repos/huggingface/datasets/issues/1350/events
https://github.com/huggingface/datasets/pull/1350
759,879,789
MDExOlB1bGxSZXF1ZXN0NTM0ODA1OTY3
1,350
add LeNER-Br dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonatasgrosman", "id": 5097052, "login": "jonatasgrosman", "node_id": "MDQ6VXNlcjUwOTcwNTI=", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "type": "User", "url": "https://api.github.com/users/jonatasgrosman" }
[]
closed
false
null
[]
null
[ "I don't know what happened, my first commit passed on all checks, but after just a README.md update one of the scripts failed, is it normal? 😕 ", "Looks like a flaky connection error, I've launched a re-run, it should be fine :)", "The RemoteDatasetTest error in the CI is just a connection error, we can ignor...
2020-12-09T00:06:38Z
2020-12-10T14:11:33Z
2020-12-10T14:11:33Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1350.diff", "html_url": "https://github.com/huggingface/datasets/pull/1350", "merged_at": "2020-12-10T14:11:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1350.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1350" }
Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1350/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1350/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2155/comments
https://api.github.com/repos/huggingface/datasets/issues/2155/events
https://github.com/huggingface/datasets/pull/2155
846,786,897
MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4
2,155
Add table classes to the documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! 😄 " ]
2021-03-31T14:36:10Z
2021-04-01T16:46:30Z
2021-03-31T15:42:08Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2155.diff", "html_url": "https://github.com/huggingface/datasets/pull/2155", "merged_at": "2021-03-31T15:42:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2155.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2155" }
Following #2025, I added the table classes to the documentation. cc @albertvillanova
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2155/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2155/timeline
null
null
true
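For readers unfamiliar with the table classes this PR documents: every `Dataset` is backed by one, exposed as `ds.data`, a `datasets` table wrapping a `pyarrow.Table`. A small sketch (the concrete subclass printed is an assumption and depends on how the dataset was loaded):

```python
from datasets import load_dataset

ds = load_dataset("ag_news", split="test")
table = ds.data
print(type(table))                         # e.g. datasets.table.MemoryMappedTable
print(table.num_rows, table.column_names)  # delegated to the underlying pyarrow.Table
```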
https://api.github.com/repos/huggingface/datasets/issues/5665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5665/comments
https://api.github.com/repos/huggingface/datasets/issues/5665/events
https://github.com/huggingface/datasets/issues/5665
1,637,193,648
I_kwDODunzps5hlZew
5,665
Feature request: IterableDataset.push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2023-03-23T09:53:04Z
2023-03-23T09:53:16Z
null
CONTRIBUTOR
null
null
null
### Feature request

It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.

Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit on your disk, you'd like to leverage streaming:

```
from datasets import load_dataset

dataset = load_dataset("laion/laion400m", streaming=True, split="train")
```

Then you could filter the dataset based on certain conditions:

```
filtered_dataset = dataset.filter(lambda example: example['HEIGHT'] > 400)
```

In order to persist this dataset and push it back to the hub, one currently needs to first load the entire filtered dataset on disk and then push:

```
from datasets import Dataset

Dataset.from_generator(filtered_dataset.__iter__).push_to_hub(...)
```

It would be great if we could instead lazily push the data to the hub (basically stream the data to the hub), not being limited by our disk size:

```
filtered_dataset.push_to_hub("my-filtered-dataset")
```

### Motivation

This feature would be very useful for people who want to filter huge datasets without having to load the entire dataset, or a filtered version thereof, on their local disk.

### Your contribution

Happy to test out a PR :)
{ "+1": 7, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 7, "url": "https://api.github.com/repos/huggingface/datasets/issues/5665/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5665/timeline
null
null
false
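Spelling out the current workaround described in the request above as a runnable sketch (repo id and filter condition taken from the issue; this materializes everything on local disk first, which is exactly the limitation the feature would remove):

```python
from datasets import Dataset, load_dataset

streamed = load_dataset("laion/laion400m", streaming=True, split="train")
filtered = streamed.filter(lambda example: example["HEIGHT"] > 400)

def examples():
    # Drain the streamed, filtered dataset; from_generator writes it to disk.
    yield from filtered

Dataset.from_generator(examples).push_to_hub("my-filtered-dataset")
```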
https://api.github.com/repos/huggingface/datasets/issues/5959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5959/comments
https://api.github.com/repos/huggingface/datasets/issues/5959/events
https://github.com/huggingface/datasets/issues/5959
1,757,397,507
I_kwDODunzps5ov8ID
5,959
read metric glue.py from local file
{ "avatar_url": "https://avatars.githubusercontent.com/u/31148397?v=4", "events_url": "https://api.github.com/users/JiazhaoLi/events{/privacy}", "followers_url": "https://api.github.com/users/JiazhaoLi/followers", "following_url": "https://api.github.com/users/JiazhaoLi/following{/other_user}", "gists_url": "https://api.github.com/users/JiazhaoLi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JiazhaoLi", "id": 31148397, "login": "JiazhaoLi", "node_id": "MDQ6VXNlcjMxMTQ4Mzk3", "organizations_url": "https://api.github.com/users/JiazhaoLi/orgs", "received_events_url": "https://api.github.com/users/JiazhaoLi/received_events", "repos_url": "https://api.github.com/users/JiazhaoLi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JiazhaoLi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JiazhaoLi/subscriptions", "type": "User", "url": "https://api.github.com/users/JiazhaoLi" }
[]
closed
false
null
[]
null
[ "Sorry, I solve this by call `evaluate.load('glue_metric.py','sst-2')`\r\n" ]
2023-06-14T17:59:35Z
2023-06-14T18:04:16Z
2023-06-14T18:04:16Z
NONE
null
null
null
### Describe the bug

Currently, the server is off-line. I am using the GLUE metric from the local file downloaded from the hub.

I download/cache datasets using `load_dataset('glue', 'sst2', cache_dir='/xxx')`, and then in off-line mode I use `load_dataset('xxx/glue.py', 'sst2', cache_dir='/xxx')`. I can successfully reuse cached datasets.

My problem is with `load_metric`. When I run `load_dataset('xxx/glue_metric.py', 'sst2', cache_dir='/xxx')`, it returns:

    File "xx/lib64/python3.9/site-packages/datasets/utils/deprecation_utils.py", line 46, in wrapper
      return deprecated_function(*args, **kwargs)
    File "xx//lib64/python3.9/site-packages/datasets/load.py", line 1392, in load_metric
      metric = metric_cls(
    TypeError: 'NoneType' object is not callable

Thanks in advance for the help!

### Steps to reproduce the bug

N/A

### Expected behavior

N/A

### Environment info

`datasets == 2.12.0`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5959/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5959/timeline
null
completed
false
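The author's own resolution (in the comment above) was to load the local metric script through the `evaluate` library instead. A hedged sketch of that pattern; the path is the local file from the issue, and `sst2` is the canonical GLUE config name (the comment itself wrote `'sst-2'`):

```python
import evaluate

# Load the GLUE metric from a local script file while offline.
metric = evaluate.load("glue_metric.py", "sst2")
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])
print(metric.compute())  # e.g. {'accuracy': ...}
```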
https://api.github.com/repos/huggingface/datasets/issues/5903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5903/comments
https://api.github.com/repos/huggingface/datasets/issues/5903/events
https://github.com/huggingface/datasets/pull/5903
1,727,372,549
PR_kwDODunzps5RbV82
5,903
Relax `ci.yml` trigger for `pull_request` based on modified paths
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
open
false
null
[]
null
[ "Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! 🤗", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5903). All of your documentation changes will be reflected on that endpoint.", "Maybe ...
2023-05-26T10:46:52Z
2023-09-07T15:52:36Z
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5903.diff", "html_url": "https://github.com/huggingface/datasets/pull/5903", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5903" }
## What's in this PR?

As of a previous PR at #5902, I've seen that the CI was automatically triggered on any file change, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as a modification to a Jupyter Notebook has no effect/impact on the `ci.yml` outcome. So this PR restricts the paths that trigger `ci.yml`, to avoid wasting resources when not needed.

## What's pending in this PR?

I would like to confirm whether this should affect both `push` and `pull_request`, since modifications to just those files won't change the `ci.yml` outcome, so maybe it's worth skipping it in the `push` trigger too.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5903/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3905/comments
https://api.github.com/repos/huggingface/datasets/issues/3905/events
https://github.com/huggingface/datasets/pull/3905
1,168,320,568
PR_kwDODunzps40ZJQJ
3,905
Perplexity Metric Card
{ "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emibaylor", "id": 27527747, "login": "emibaylor", "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "repos_url": "https://api.github.com/users/emibaylor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "type": "User", "url": "https://api.github.com/users/emibaylor" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.", "I'm wondering if we should add that perplexity can be used for analyzing datasets as well", "Otherwise, looks good! Good job, @emibaylor !" ]
2022-03-14T12:39:40Z
2022-03-16T19:38:56Z
2022-03-16T19:38:56Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3905.diff", "html_url": "https://github.com/huggingface/datasets/pull/3905", "merged_at": "2022-03-16T19:38:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3905.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3905" }
Add Perplexity metric card.

Note that it is currently still missing the citation, but I plan to add it later today.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3905/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3905/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4070/comments
https://api.github.com/repos/huggingface/datasets/issues/4070/events
https://github.com/huggingface/datasets/pull/4070
1,186,810,205
PR_kwDODunzps41VMYq
4,070
Create metric card for seqeval
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-03-30T18:08:01Z
2022-04-01T19:02:58Z
2022-04-01T18:57:25Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4070.diff", "html_url": "https://github.com/huggingface/datasets/pull/4070", "merged_at": "2022-04-01T18:57:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4070.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4070" }
Proposing metric card for seqeval. Not sure which values to report for Popular papers though.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4070/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4070/timeline
null
null
true
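For readers unfamiliar with the metric whose card is proposed above: seqeval scores entity-level predictions over IOB-tagged sequences. A minimal, hedged usage sketch, shown via the `evaluate` library rather than the `datasets.load_metric` API that was current at the time; the toy tags are illustrative:

```python
import evaluate

seqeval = evaluate.load("seqeval")
predictions = [["O", "O", "B-MISC", "I-MISC", "O"]]
references  = [["O", "O", "B-MISC", "I-MISC", "O"]]
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])  # 1.0 for a perfect match
```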
https://api.github.com/repos/huggingface/datasets/issues/2858
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2858/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2858/comments
https://api.github.com/repos/huggingface/datasets/issues/2858/events
https://github.com/huggingface/datasets/pull/2858
984,145,568
MDExOlB1bGxSZXF1ZXN0NzIzNjEzNzQ0
2,858
Fix s3fs version in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-08-31T18:05:43Z
2021-09-06T13:33:35Z
2021-08-31T21:29:51Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2858.diff", "html_url": "https://github.com/huggingface/datasets/pull/2858", "merged_at": "2021-08-31T21:29:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/2858.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2858" }
The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore. This PR changes the constraints to avoid the new conflicts. In particular, it pins the version of s3fs.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2858/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2858/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2560/comments
https://api.github.com/repos/huggingface/datasets/issues/2560/events
https://github.com/huggingface/datasets/pull/2560
932,143,634
MDExOlB1bGxSZXF1ZXN0Njc5NTMyODk4
2,560
fix Dataset.map when num_procs > num rows
{ "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/connor-mccarthy", "id": 55268212, "login": "connor-mccarthy", "node_id": "MDQ6VXNlcjU1MjY4MjEy", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "type": "User", "url": "https://api.github.com/users/connor-mccarthy" }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for fixing this :)\r\n\r\nLooks like you have tons of changes due to code formatting.\r\nWe're using `black` for this, with a custom line length. To run our code formatting, you just need to run\r\n```\r\nmake style\r\n```\r\n\r\nThen for the windows error in the CI, I'm looking into it. It's probably ...
2021-06-29T02:24:11Z
2021-06-29T15:00:18Z
2021-06-29T14:53:31Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2560.diff", "html_url": "https://github.com/huggingface/datasets/pull/2560", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2560.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2560" }
closes #2470

## Testing notes

To run updated tests:

```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```

With Python code (to view warning):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2560/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2560/timeline
null
null
true
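Restating the PR's own testing snippet with the fixed behavior spelled out (the warning wording is an assumption; the capping behavior is what the PR title describes):

```python
from datasets import Dataset

# A 1-row dataset mapped with num_proc=10: after this fix, num_proc is
# reduced to the number of rows (with a warning) instead of spawning
# workers with empty shards.
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))  # 1
dataset = dataset.map(lambda example: example, num_proc=10)
```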
https://api.github.com/repos/huggingface/datasets/issues/6274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6274/comments
https://api.github.com/repos/huggingface/datasets/issues/6274/events
https://github.com/huggingface/datasets/issues/6274
1,921,036,328
I_kwDODunzps5ygLAo
6,274
FileNotFoundError for dataset with multiple builder config
{ "avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4", "events_url": "https://api.github.com/users/LouisChen15/events{/privacy}", "followers_url": "https://api.github.com/users/LouisChen15/followers", "following_url": "https://api.github.com/users/LouisChen15/following{/other_user}", "gists_url": "https://api.github.com/users/LouisChen15/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LouisChen15", "id": 97120485, "login": "LouisChen15", "node_id": "U_kgDOBcnw5Q", "organizations_url": "https://api.github.com/users/LouisChen15/orgs", "received_events_url": "https://api.github.com/users/LouisChen15/received_events", "repos_url": "https://api.github.com/users/LouisChen15/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LouisChen15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LouisChen15/subscriptions", "type": "User", "url": "https://api.github.com/users/LouisChen15" }
[]
closed
false
null
[]
null
[ "Please tell me if the above info is not enough for solving the problem. I will then make my dataset public temporarily so that you can really reproduce the bug. " ]
2023-10-01T23:45:56Z
2023-10-02T20:09:38Z
2023-10-02T20:09:38Z
NONE
null
null
null
### Describe the bug

When there is only one config and only the dataset name is entered when using `datasets.load_dataset()`, it works fine. But if I create a second builder_config for my dataset and enter the config name when using `datasets.load_dataset()`, the following error happens:

    FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow'

The "XXX.incomplete" folder in the cache folder of my dataset disappears before "generating test split", which does not happen when no config name is entered and the config name is "default":

    C:\Users\chenx\.cache\huggingface\datasets\my_dataset\0_shot_multiple_choice\1.0.0

The folder that is supposed to remain under the above directory disappears, and the data generator then has no place to generate data into.

### Steps to reproduce the bug

    test = load_dataset('my_dataset', '0_shot_multiple_choice')

### Expected behavior

    FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow'

### Environment info

- datasets 2.14.5
- python 3.8.18
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6274/timeline
null
completed
false
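For context on the setup described in the issue above, a hypothetical sketch of a loading script that declares a second `BuilderConfig`; every name and feature here is illustrative, not the reporter's actual script:

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    # Two configs: the implicit "default" plus the one named in the issue.
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="default", version=datasets.Version("1.0.0")),
        datasets.BuilderConfig(name="0_shot_multiple_choice", version=datasets.Version("1.0.0")),
    ]
    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"question": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TEST)]

    def _generate_examples(self):
        yield 0, {"question": "example"}
```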
https://api.github.com/repos/huggingface/datasets/issues/1440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1440/comments
https://api.github.com/repos/huggingface/datasets/issues/1440/events
https://github.com/huggingface/datasets/pull/1440
760,973,057
MDExOlB1bGxSZXF1ZXN0NTM1NzEyNDY1
1,440
Adding english plaintext jokes dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22298787?v=4", "events_url": "https://api.github.com/users/purvimisal/events{/privacy}", "followers_url": "https://api.github.com/users/purvimisal/followers", "following_url": "https://api.github.com/users/purvimisal/following{/other_user}", "gists_url": "https://api.github.com/users/purvimisal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/purvimisal", "id": 22298787, "login": "purvimisal", "node_id": "MDQ6VXNlcjIyMjk4Nzg3", "organizations_url": "https://api.github.com/users/purvimisal/orgs", "received_events_url": "https://api.github.com/users/purvimisal/received_events", "repos_url": "https://api.github.com/users/purvimisal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/purvimisal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/purvimisal/subscriptions", "type": "User", "url": "https://api.github.com/users/purvimisal" }
[]
closed
false
null
[]
null
[ "Hi @purvimisal, thanks for your contributions!\r\n\r\nThis jokes dataset has come up before, and after a conversation with the initial submitter, we decided not to add it then. Humor is important, but looking at the actual data points in this set raises several concerns :) \r\n\r\nThe main issue is the Reddit part...
2020-12-10T07:04:17Z
2020-12-13T05:22:00Z
2020-12-12T05:55:43Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1440.diff", "html_url": "https://github.com/huggingface/datasets/pull/1440", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1440.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1440" }
This PR adds a dataset of 200k English plaintext jokes from three sources: Reddit, Stupidstuff, and Wocka. Link: https://github.com/taivop/joke-dataset

This is my second PR. First was: [#1269](https://github.com/huggingface/datasets/pull/1269)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1440/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1440/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1618/comments
https://api.github.com/repos/huggingface/datasets/issues/1618/events
https://github.com/huggingface/datasets/issues/1618
772,248,730
MDU6SXNzdWU3NzIyNDg3MzA=
1,618
Can't filter language:EN on https://huggingface.co/datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidefiocco", "id": 4547987, "login": "davidefiocco", "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "repos_url": "https://api.github.com/users/davidefiocco/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "type": "User", "url": "https://api.github.com/users/davidefiocco" }
[]
closed
false
null
[]
null
[ "cc'ing @mapmeld ", "Full language list is now deployed to https://huggingface.co/datasets ! Recommend close", "Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime." ]
2020-12-21T15:23:23Z
2020-12-22T17:17:00Z
2020-12-22T17:16:09Z
NONE
null
null
null
When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected to me; am I missing something? I'd expect English to be selectable in the language widget. This problem reproduces on Mozilla Firefox and MS Edge: ![screenshot](https://user-images.githubusercontent.com/4547987/102792244-892e1f00-43a8-11eb-9e89-4826ca201a87.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1618/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1618/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2995/comments
https://api.github.com/repos/huggingface/datasets/issues/2995/events
https://github.com/huggingface/datasets/pull/2995
1,013,143,868
PR_kwDODunzps4sjThd
2,995
Fix trivia_qa unfiltered
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "CI fails due to missing tags, but they will be added in https://github.com/huggingface/datasets/pull/2949" ]
2021-10-01T09:53:43Z
2021-10-01T10:04:11Z
2021-10-01T10:04:10Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2995.diff", "html_url": "https://github.com/huggingface/datasets/pull/2995", "merged_at": "2021-10-01T10:04:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2995.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2995" }
Fix https://github.com/huggingface/datasets/issues/2993
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2995/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2995/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5600/comments
https://api.github.com/repos/huggingface/datasets/issues/5600/events
https://github.com/huggingface/datasets/issues/5600
1,606,585,596
I_kwDODunzps5fwoz8
5,600
Dataloader getitem not working for DreamboothDatasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/76955987?v=4", "events_url": "https://api.github.com/users/salahiguiliz/events{/privacy}", "followers_url": "https://api.github.com/users/salahiguiliz/followers", "following_url": "https://api.github.com/users/salahiguiliz/following{/other_user}", "gists_url": "https://api.github.com/users/salahiguiliz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/salahiguiliz", "id": 76955987, "login": "salahiguiliz", "node_id": "MDQ6VXNlcjc2OTU1OTg3", "organizations_url": "https://api.github.com/users/salahiguiliz/orgs", "received_events_url": "https://api.github.com/users/salahiguiliz/received_events", "repos_url": "https://api.github.com/users/salahiguiliz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/salahiguiliz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/salahiguiliz/subscriptions", "type": "User", "url": "https://api.github.com/users/salahiguiliz" }
[]
closed
false
null
[]
null
[ "Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data." ]
2023-03-02T11:00:27Z
2023-03-13T17:59:35Z
2023-03-13T17:59:35Z
NONE
null
null
null
### Describe the bug

Dataloader `__getitem__` is not working as before (see the example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529)). Moving Datasets to 2.8.0 solved the issue.

### Steps to reproduce the bug

1. Use DreamBoothDataset to load some images.
2. An error occurs after loading, when trying to visualise the images.

### Expected behavior

I was expecting a numpy array of the image.

### Environment info

- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5600/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5600/timeline
null
completed
false
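As the maintainer's reply above points out, the `DreamBoothDataset` in the linked example is a plain PyTorch `Dataset`, not a `datasets.Dataset`, so the report is unrelated to this library. A stripped-down sketch of that PyTorch pattern (class and field names illustrative):

```python
from PIL import Image
from torch.utils.data import Dataset

class DreamBoothDataset(Dataset):
    """Minimal map-style PyTorch dataset returning one image per index."""

    def __init__(self, image_paths):
        self.image_paths = image_paths

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, index):
        return Image.open(self.image_paths[index])
```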
https://api.github.com/repos/huggingface/datasets/issues/5398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5398/comments
https://api.github.com/repos/huggingface/datasets/issues/5398/events
https://github.com/huggingface/datasets/issues/5398
1,514,425,231
I_kwDODunzps5aREuP
5,398
Unpin pydantic
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2022-12-30T10:37:31Z
2022-12-30T10:43:41Z
2022-12-30T10:43:41Z
MEMBER
null
null
null
Once `pydantic` fixes the issue in their 1.10.3 version, unpin it. See issue: - #5394 See temporary fix: - #5395
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5398/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5398/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5938/comments
https://api.github.com/repos/huggingface/datasets/issues/5938/events
https://github.com/huggingface/datasets/pull/5938
1,749,462,851
PR_kwDODunzps5SmbkI
5,938
Make get_from_cache use custom temp filename that is locked
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-06-09T09:01:13Z
2023-06-14T13:35:38Z
2023-06-14T13:27:24Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5938.diff", "html_url": "https://github.com/huggingface/datasets/pull/5938", "merged_at": "2023-06-14T13:27:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/5938.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5938" }
This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache. This PR stops using `tempfile` to generate the temporary filename. Additionally, the behavior is now aligned for both `resume_download=True` and `resume_download=False`. Refactor `temp_file_manager` so that it uses the filename that is locked: - Use `cache_path + ".incomplete"`, when the locked one is `cache_path + ".lock"` Before, it was using `tempfile` inside `cache_dir`, which was not locked: although a name collision was very improbable (8 random characters), it was not impossible with a huge number of concurrent processes. Maybe related to "Stale file handle" issues caused by `tempfile`: - [ ] https://huggingface.co/datasets/tapaco/discussions/4 - [ ] https://huggingface.co/datasets/xcsr/discussions/1 - [ ] https://huggingface.co/datasets/covost2/discussions/3 ``` Error code: ConfigNamesError Exception: OSError Message: [Errno 116] Stale file handle Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory raise e1 from None File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory return HubDatasetModuleFactoryWithScript( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module dataset_readme_path = self.download_dataset_readme_file() File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 896, in download_dataset_readme_file return cached_path( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path output_path = get_from_cache( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache http_get( File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__ result = self.file.__exit__(exc, value, tb) OSError: [Errno 116] Stale file handle ``` - the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a filename that has already been closed by another process - note that `tempfile` filenames are randomly generated but not locked in our code CC: @severo
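For illustration, a minimal sketch of the locking pattern described above (simplified; `fetch` is a hypothetical stand-in for the actual `http_get` call, and the real implementation may differ):

```python
# Sketch: the temporary filename is deterministic and covered by the lock.
import os

from filelock import FileLock


def download_to_cache(cache_path: str, fetch) -> str:
    lock_path = cache_path + ".lock"              # the file that is locked
    incomplete_path = cache_path + ".incomplete"  # the temp file, same stem
    with FileLock(lock_path):
        if not os.path.exists(cache_path):
            with open(incomplete_path, "wb") as temp_file:
                fetch(temp_file)  # e.g. stream the HTTP response into the file
            os.rename(incomplete_path, cache_path)  # publish atomically
    return cache_path
```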
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5938/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5938/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2391/comments
https://api.github.com/repos/huggingface/datasets/issues/2391/events
https://github.com/huggingface/datasets/issues/2391
898,128,099
MDU6SXNzdWU4OTgxMjgwOTk=
2,391
Missing original answers in kilt-TriviaQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) ", "I can open a PR but there is 2 details to fix:\r\n- the name for the corresponding key (e.g. `original_answer`)\r\n- how to implement it: I’m not sure what ...
2021-05-21T14:57:07Z
2021-06-14T17:29:11Z
2021-06-14T17:29:11Z
CONTRIBUTOR
null
null
null
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42 but from the answer of @fabiopetroni it seems that the problem comes from HF `datasets`. ## Describe the bug The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']`, contains a list of alternative answers which are accepted for the question. However, it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`). ## How to fix It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps in the same place where one retrieves the questions: https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data cc @yjernite who previously answered an issue about KILT and TriviaQA :)
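A rough sketch of the proposed fix (the config names and the `original_answer` field follow the discussion, and it is assumed here that KILT's `id` matches TriviaQA's `question_id`, as in the README snippet referenced above):

```python
# Sketch only: augment kilt-TriviaQA with the original TriviaQA answer.
from datasets import load_dataset

kilt_triviaqa = load_dataset("kilt_tasks", "triviaqa_support_only", split="train")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext", split="train")

# map each TriviaQA question id to its single original answer string
qid_to_answer = {ex["question_id"]: ex["answer"]["value"] for ex in trivia_qa}

kilt_triviaqa = kilt_triviaqa.map(
    lambda ex: {"original_answer": qid_to_answer.get(ex["id"], "")}
)
```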
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2391/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2391/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5400/comments
https://api.github.com/repos/huggingface/datasets/issues/5400/events
https://github.com/huggingface/datasets/pull/5400
1,517,032,972
PR_kwDODunzps5GhaGI
5,400
Support streaming datasets with os.path.exists and Path.exists
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-03T07:42:37Z
2023-01-06T10:42:44Z
2023-01-06T10:35:44Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5400.diff", "html_url": "https://github.com/huggingface/datasets/pull/5400", "merged_at": "2023-01-06T10:35:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/5400.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5400" }
Support streaming datasets with `os.path.exists` and `pathlib.Path.exists`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5400/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5400/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4159/comments
https://api.github.com/repos/huggingface/datasets/issues/4159/events
https://github.com/huggingface/datasets/pull/4159
1,202,522,153
PR_kwDODunzps42Izmd
4,159
Add `TruthfulQA` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4", "events_url": "https://api.github.com/users/jon-tow/events{/privacy}", "followers_url": "https://api.github.com/users/jon-tow/followers", "following_url": "https://api.github.com/users/jon-tow/following{/other_user}", "gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jon-tow", "id": 41410219, "login": "jon-tow", "node_id": "MDQ6VXNlcjQxNDEwMjE5", "organizations_url": "https://api.github.com/users/jon-tow/orgs", "received_events_url": "https://api.github.com/users/jon-tow/received_events", "repos_url": "https://api.github.com/users/jon-tow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions", "type": "User", "url": "https://api.github.com/users/jon-tow" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful 🤗 )" ]
2022-04-12T23:19:04Z
2022-06-08T15:51:33Z
2022-06-08T14:43:34Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4159.diff", "html_url": "https://github.com/huggingface/datasets/pull/4159", "merged_at": "2022-06-08T14:43:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/4159.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4159" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4159/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4159/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6208/comments
https://api.github.com/repos/huggingface/datasets/issues/6208/events
https://github.com/huggingface/datasets/pull/6208
1,879,572,646
PR_kwDODunzps5ZcnpJ
6,208
Do not filter out .zip extensions from no-script datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-09-04T06:07:12Z
2023-09-04T09:22:19Z
2023-09-04T09:13:32Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6208.diff", "html_url": "https://github.com/huggingface/datasets/pull/6208", "merged_at": "2023-09-04T09:13:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6208" }
This PR is a hotfix for: - #6207 That PR introduced the filtering out of `.zip` extensions; this PR reverts that. Hotfix #6207. Maybe we should make patch releases: the bug was introduced in 2.13.1. CC: @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6208/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6208/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2251/comments
https://api.github.com/repos/huggingface/datasets/issues/2251/events
https://github.com/huggingface/datasets/issues/2251
865,848,705
MDU6SXNzdWU4NjU4NDg3MDU=
2,251
while running run_qa.py, ran into a value error
{ "avatar_url": "https://avatars.githubusercontent.com/u/44570724?v=4", "events_url": "https://api.github.com/users/nlee0212/events{/privacy}", "followers_url": "https://api.github.com/users/nlee0212/followers", "following_url": "https://api.github.com/users/nlee0212/following{/other_user}", "gists_url": "https://api.github.com/users/nlee0212/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nlee0212", "id": 44570724, "login": "nlee0212", "node_id": "MDQ6VXNlcjQ0NTcwNzI0", "organizations_url": "https://api.github.com/users/nlee0212/orgs", "received_events_url": "https://api.github.com/users/nlee0212/received_events", "repos_url": "https://api.github.com/users/nlee0212/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nlee0212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nlee0212/subscriptions", "type": "User", "url": "https://api.github.com/users/nlee0212" }
[]
open
false
null
[]
null
[]
2021-04-23T07:51:03Z
2021-04-23T07:51:03Z
null
NONE
null
null
null
Command: python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/ Error: ValueError: External features info don't match the dataset: Got {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)} with type struct<answer: struct<text: string, answer_start: int32, html_answer_start: int32>, context: string, id: string, question: string, raw_html: string, title: string, url: string> but expected something like {'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)} with type struct<answer: struct<answer_start: int32, html_answer_start: int32, text: string>, context: string, id: string, question: string, raw_html: string, title: string, url: string> I didn't encounter this error 4 hours ago. Any solutions for this kind of issue? It looks like the obtained dataset format follows the 'Data Fields' ordering, while the expected one follows the 'Data Instances' ordering.
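For reference, the two specs in the error contain the same fields in a different order, and Arrow struct types are order-sensitive; a small illustration (a diagnostic, not a fix) of why reordered struct fields compare as unequal. If a stale cache was built with a different field order, regenerating the cache is a plausible, though unconfirmed, remedy:

```python
# Arrow compares struct types field-by-field in order, so reordered fields
# produce exactly the kind of "type ... but expected ..." mismatch above.
import pyarrow as pa

a = pa.struct([("text", pa.string()), ("answer_start", pa.int32())])
b = pa.struct([("answer_start", pa.int32()), ("text", pa.string())])
print(a == b)  # False
```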
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2251/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2251/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4288/comments
https://api.github.com/repos/huggingface/datasets/issues/4288/events
https://github.com/huggingface/datasets/pull/4288
1,226,821,732
PR_kwDODunzps43XLKi
4,288
Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
[]
2022-05-05T15:21:49Z
2022-05-10T12:55:06Z
2022-05-10T12:09:48Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4288.diff", "html_url": "https://github.com/huggingface/datasets/pull/4288", "merged_at": "2022-05-10T12:09:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4288.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4288" }
This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4288/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4288/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1990/comments
https://api.github.com/repos/huggingface/datasets/issues/1990/events
https://github.com/huggingface/datasets/issues/1990
822,384,502
MDU6SXNzdWU4MjIzODQ1MDI=
1,990
OSError: Memory mapping file failed: Cannot allocate memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[]
closed
false
null
[]
null
[ "Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you", "It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load l...
2021-03-04T18:21:58Z
2021-08-04T18:04:25Z
2021-08-04T18:04:25Z
NONE
null
null
null
Hi, I am trying to run a script with a Wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py ``` python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128 ``` I am using transformers version 4.3.2. But I got a memory error using this dataset; is there a way I could save on memory with the datasets library for the Wikipedia dataset? In particular, I need to train a model on multiple Wikipedia datasets concatenated together. Thank you very much @lhoestq for your help and suggestions: ``` File "run_mlm.py", line 441, in <module> main() File "run_mlm.py", line 233, in main split=f"train[{data_args.validation_split_percentage}%:]", File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset map_tuple=True, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset in_memory=in_memory, File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table stream = stream_from(filename) File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1990/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3788/comments
https://api.github.com/repos/huggingface/datasets/issues/3788/events
https://github.com/huggingface/datasets/issues/3788
1,150,375,720
I_kwDODunzps5EkVco
3,788
Only-data dataset loaded unexpectedly as validation split
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I see two options:\r\n1. drop the \"dev\" keyword since it can be considered too generic\r\n2. improve the pattern to something more reasonable, e.g. asking for a separator before and after \"dev\"\r\n```python\r\n[\"*[ ._-]dev[ ._-]*\", \"dev[ ._-]*\"]\r\n```\r\n\r\nI think 2. is nice. If we agree on this one we ...
2022-02-25T12:11:39Z
2022-02-28T11:22:22Z
null
MEMBER
null
null
null
## Describe the bug As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as a VALIDATION split, even if this is not the desired behavior, e.g. for a file named `datosdevision.jsonl.gz`.
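A quick check of the over-matching, using `fnmatch` as an approximation of the glob semantics involved, against option 2 from the first comment:

```python
import fnmatch

filename = "datosdevision.jsonl.gz"

print(fnmatch.fnmatch(filename, "*dev*"))              # True: wrongly matched
print(fnmatch.fnmatch(filename, "*[ ._-]dev[ ._-]*"))  # False with separators
print(fnmatch.fnmatch(filename, "dev[ ._-]*"))         # False
```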
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3788/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3788/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4586/comments
https://api.github.com/repos/huggingface/datasets/issues/4586/events
https://github.com/huggingface/datasets/pull/4586
1,287,105,636
PR_kwDODunzps46e9xB
4,586
Host pn_summary data on the Hub instead of Google Drive
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-06-28T10:05:05Z
2022-06-28T14:52:56Z
2022-06-28T14:42:03Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4586.diff", "html_url": "https://github.com/huggingface/datasets/pull/4586", "merged_at": "2022-06-28T14:42:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/4586.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4586" }
Fix #4581.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4586/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4586/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3347/comments
https://api.github.com/repos/huggingface/datasets/issues/3347/events
https://github.com/huggingface/datasets/pull/3347
1,067,738,902
PR_kwDODunzps4vNthw
3,347
iter_archive for zip files
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
[]
closed
false
null
[]
null
[ "And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://h...
2021-11-30T22:34:17Z
2021-12-04T00:22:22Z
2021-12-04T00:22:11Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3347.diff", "html_url": "https://github.com/huggingface/datasets/pull/3347", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3347.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3347" }
* In this PR, I added the option to iterate through zipfiles for `download_manager.py` only. * The next PR will apply the same to `streaming_download_manager.py`. * Related issue: #3272. ## Comments: * There is no `.isreg()` equivalent in the zipfile library to check whether a file is regular, so I used `.is_dir()` instead to skip directories. * For now I got `streaming_download_manager.py` working for local zip files, but not for URLs. I get the following error when I test it on an archive in Google Drive, so I am still working on it. `BlockSizeError: Got more bytes so far (>2112) than requested (22)` ## Tasks: - [x] download_manager.py - [ ] streaming_download_manager.py
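For reference, a simplified sketch (not the PR's exact code) of iterating a zip archive while skipping directory entries with `is_dir()`, as described above:

```python
# Yield (path, file object) pairs for the regular files in a zip archive.
import zipfile


def iter_zip_archive(path):
    with zipfile.ZipFile(path) as zip_file:
        for member in zip_file.infolist():
            if member.is_dir():  # zipfile has no .isreg(); skip directories
                continue
            with zip_file.open(member) as file_obj:
                yield member.filename, file_obj
```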
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3347/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3347/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2027/comments
https://api.github.com/repos/huggingface/datasets/issues/2027/events
https://github.com/huggingface/datasets/pull/2027
828,490,444
MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1
2,027
Update format columns in Dataset.rename_columns
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-03-10T23:50:59Z
2021-03-11T14:38:40Z
2021-03-11T14:38:40Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2027.diff", "html_url": "https://github.com/huggingface/datasets/pull/2027", "merged_at": "2021-03-11T14:38:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2027.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2027" }
Fixes #2026
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2027/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5185/comments
https://api.github.com/repos/huggingface/datasets/issues/5185/events
https://github.com/huggingface/datasets/issues/5185
1,432,021,611
I_kwDODunzps5VWupr
5,185
Allow passing a subset of output features to Dataset.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanderland", "id": 48946947, "login": "sanderland", "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "organizations_url": "https://api.github.com/users/sanderland/orgs", "received_events_url": "https://api.github.com/users/sanderland/received_events", "repos_url": "https://api.github.com/users/sanderland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "type": "User", "url": "https://api.github.com/users/sanderland" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2022-11-01T20:07:20Z
2022-11-01T20:07:34Z
null
CONTRIBUTOR
null
null
null
### Feature request Currently, map does one of two things to the features (if I'm not mistaken): * when you do not pass features, types are assumed to be equal to the input if they can be cast, and inferred otherwise * when you pass a full specification of features, output features are set to this However, sometimes you want to just pass some of the output types, particularly when the first of these modes infers an incorrect type. This currently crashes. ### Motivation To give a little background: this problem appears in converting labels to ids, where the labels happen to be floats rather than strings. Consider the following use of `map` to convert from float to int: ```python data = Dataset.from_dict({'y':[1.0,2.0,3.0]}) mapped = data.map(lambda r: {'y': int(r['y'])}) mapped['y'] # is floats, not ints ``` The result is a float again, since after the mapping operation it forces the old datatypes back on the data. Passing `features=Features({"y": Value(dtype="int64")})` to map works in principle, but then extending it a little to e.g. ```python def format_data(r): return {**tokenizer(r["text"]), "y": int(r["y"])} data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]}) mapped = data.map( format_data, features=Features({'y': Value(dtype="int64")}), remove_columns=["text"], ) ``` results in a crash in the dataset internals, as they expect either all or no output features to be specified. Of course one can pass a full feature specification, but this becomes tokenizer specific and very awkward. ### Your contribution I've looked at `write_batch` and particularly `col_type = features[col] if features else None`, but checking for `col in features` here makes it fail elsewhere, and the structure makes it hard to understand how and why. I do not think I would have the time myself to get to the bottom of this anytime soon.
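Until partial feature specifications are supported, one workaround sketch (an assumption, not an endorsed API) is to build the full `features` argument from the input features plus the overrides, so only the columns that need a different type are spelled out; the tokenizer columns from the motivating example are omitted here to keep the sketch self-contained:

```python
# Derive the full output features from the input, overriding only "y".
from datasets import Dataset, Features, Value

data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]})

features = Features({**data.features, "y": Value("int64")})
del features["text"]  # "text" is removed below, so it must not be declared

mapped = data.map(
    lambda r: {"y": int(r["y"])},
    features=features,
    remove_columns=["text"],
)
print(mapped.features)  # y is now int64
```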
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5185/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2642/comments
https://api.github.com/repos/huggingface/datasets/issues/2642/events
https://github.com/huggingface/datasets/issues/2642
944,175,697
MDU6SXNzdWU5NDQxNzU2OTc=
2,642
Support multi-worker with streaming dataset (IterableDataset).
{ "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cccntu", "id": 31893406, "login": "cccntu", "node_id": "MDQ6VXNlcjMxODkzNDA2", "organizations_url": "https://api.github.com/users/cccntu/orgs", "received_events_url": "https://api.github.com/users/cccntu/received_events", "repos_url": "https://api.github.com/users/cccntu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "type": "User", "url": "https://api.github.com/users/cccntu" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! This is a great idea :)\r\nI think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing. \r\n\r\nRegarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a...
2021-07-14T08:22:58Z
2021-07-15T09:37:34Z
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** The current `.map` does not support multi-processing; the CPU can become a bottleneck if the pre-processing is complex (e.g. T5 span masking). **Describe the solution you'd like** Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`. **Describe alternatives you've considered** A simpler solution is to shard the dataset and process it in parallel with the PyTorch DataLoader. The shards do not need to be of equal size. * https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset **Additional context**
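A sketch of the sharding alternative described above (file paths are placeholders and the per-line processing is a stand-in for real pre-processing):

```python
# Each DataLoader worker streams its own subset of the shards.
from torch.utils.data import DataLoader, IterableDataset, get_worker_info


class ShardedStream(IterableDataset):
    def __init__(self, shards):
        self.shards = shards  # shards do not need to be of equal size

    def __iter__(self):
        info = get_worker_info()
        worker_id = info.id if info is not None else 0
        num_workers = info.num_workers if info is not None else 1
        for shard in self.shards[worker_id::num_workers]:
            with open(shard) as f:
                for line in f:
                    yield line  # replace with the actual pre-processing


loader = DataLoader(
    ShardedStream([f"data/shard_{i}.jsonl" for i in range(8)]), num_workers=4
)
```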
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2642/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2878/comments
https://api.github.com/repos/huggingface/datasets/issues/2878/events
https://github.com/huggingface/datasets/issues/2878
990,093,316
MDU6SXNzdWU5OTAwOTMzMTY=
2,878
NotADirectoryError: [WinError 267] During load_from_disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/1875064?v=4", "events_url": "https://api.github.com/users/Grassycup/events{/privacy}", "followers_url": "https://api.github.com/users/Grassycup/followers", "following_url": "https://api.github.com/users/Grassycup/following{/other_user}", "gists_url": "https://api.github.com/users/Grassycup/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Grassycup", "id": 1875064, "login": "Grassycup", "node_id": "MDQ6VXNlcjE4NzUwNjQ=", "organizations_url": "https://api.github.com/users/Grassycup/orgs", "received_events_url": "https://api.github.com/users/Grassycup/received_events", "repos_url": "https://api.github.com/users/Grassycup/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Grassycup/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Grassycup/subscriptions", "type": "User", "url": "https://api.github.com/users/Grassycup" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[]
2021-09-07T15:15:05Z
2021-09-07T15:15:05Z
null
NONE
null
null
null
## Describe the bug Trying to load a saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds in a non-Windows environment (AWS SageMaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3 from datasets import load_from_disk from datasets.filesystems import S3FileSystem s3_file = "output of save_to_disk" s3_filesystem = S3FileSystem() load_from_disk(s3_file, fs=s3_filesystem) ``` ## Expected results load_from_disk succeeds without error ## Actual results It seems to succeed in pulling the file into a Windows temp directory, as it exists on my system, but fails to process it. ``` Exception ignored in: <finalize object at 0x26409231ce0; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' Exception ignored in: <finalize object at 0x264091c7880; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2878/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2878/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6286/comments
https://api.github.com/repos/huggingface/datasets/issues/6286/events
https://github.com/huggingface/datasets/pull/6286
1,932,640,128
PR_kwDODunzps5cPKNK
6,286
Create DefunctDatasetError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-10-09T09:23:23Z
2023-10-10T07:13:22Z
2023-10-10T07:03:04Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6286.diff", "html_url": "https://github.com/huggingface/datasets/pull/6286", "merged_at": "2023-10-10T07:03:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6286.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6286" }
Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible. See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b
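A minimal sketch of what such an exception could look like (the actual definition merged in this PR may differ):

```python
class DefunctDatasetError(Exception):
    """Raised when a dataset is defunct and no longer accessible."""


# Hypothetical usage in a loading script:
# raise DefunctDatasetError("Dataset 'the_pile_books3' is defunct and no longer accessible.")
```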
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6286/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6286/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2999/comments
https://api.github.com/repos/huggingface/datasets/issues/2999/events
https://github.com/huggingface/datasets/pull/2999
1,013,536,933
PR_kwDODunzps4skgCm
2,999
Set trivia_qa writer batch size
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-10-01T16:23:26Z
2021-10-01T16:34:55Z
2021-10-01T16:34:55Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2999.diff", "html_url": "https://github.com/huggingface/datasets/pull/2999", "merged_at": "2021-10-01T16:34:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2999" }
Save some RAM when generating trivia_qa
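For context, a sketch of the kind of one-line change the description refers to (the value below is an assumption, not necessarily the one merged): `GeneratorBasedBuilder` subclasses can lower `DEFAULT_WRITER_BATCH_SIZE` so examples are flushed to disk more often, bounding RAM during generation.

```python
import datasets


class TriviaQa(datasets.GeneratorBasedBuilder):
    # Flush to disk every 1000 examples instead of the library default,
    # trading a little speed for a bounded memory footprint.
    DEFAULT_WRITER_BATCH_SIZE = 1000
    # ... _info / _split_generators / _generate_examples as in the script
```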
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2999/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5748/comments
https://api.github.com/repos/huggingface/datasets/issues/5748/events
https://github.com/huggingface/datasets/pull/5748
1,667,517,024
PR_kwDODunzps5OSgNH
5,748
[BUG FIX] Issue 5739
{ "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ericxsun", "id": 1772912, "login": "ericxsun", "node_id": "MDQ6VXNlcjE3NzI5MTI=", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "repos_url": "https://api.github.com/users/ericxsun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "type": "User", "url": "https://api.github.com/users/ericxsun" }
[]
open
false
null
[]
null
[]
2023-04-14T05:07:31Z
2023-04-14T05:07:31Z
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5748.diff", "html_url": "https://github.com/huggingface/datasets/pull/5748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5748" }
A fix for https://github.com/huggingface/datasets/issues/5739
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5748/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1532/comments
https://api.github.com/repos/huggingface/datasets/issues/1532/events
https://github.com/huggingface/datasets/pull/1532
764,772,184
MDExOlB1bGxSZXF1ZXN0NTM4NjgxODcz
1,532
adding hate-speech-and-offensive-language
{ "avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4", "events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}", "followers_url": "https://api.github.com/users/MisbahKhan789/followers", "following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}", "gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MisbahKhan789", "id": 15351802, "login": "MisbahKhan789", "node_id": "MDQ6VXNlcjE1MzUxODAy", "organizations_url": "https://api.github.com/users/MisbahKhan789/orgs", "received_events_url": "https://api.github.com/users/MisbahKhan789/received_events", "repos_url": "https://api.github.com/users/MisbahKhan789/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions", "type": "User", "url": "https://api.github.com/users/MisbahKhan789" }
[]
closed
false
null
[]
null
[ "made suggested changes and a new PR created here : https://github.com/huggingface/datasets/pull/1597" ]
2020-12-13T02:16:31Z
2020-12-17T18:36:54Z
2020-12-17T18:10:05Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1532.diff", "html_url": "https://github.com/huggingface/datasets/pull/1532", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1532.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1532" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1532/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1532/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1850/comments
https://api.github.com/repos/huggingface/datasets/issues/1850/events
https://github.com/huggingface/datasets/pull/1850
804,412,249
MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
1,850
Add cord 19 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggdupont", "id": 5583410, "login": "ggdupont", "node_id": "MDQ6VXNlcjU1ODM0MTA=", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "repos_url": "https://api.github.com/users/ggdupont/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "type": "User", "url": "https://api.github.com/users/ggdupont" }
[]
closed
false
null
[]
null
[ "Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129", "@lhoestq FYI", "Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today", "Looks all good now ! Thanks...
2021-02-09T10:22:08Z
2021-02-09T15:16:26Z
2021-02-09T15:16:26Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "html_url": "https://github.com/huggingface/datasets/pull/1850", "merged_at": "2021-02-09T15:16:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850" }
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass. ### Extras: - [x] add more metadata - [x] add full text - [x] add pre-computed document embedding
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5292/comments
https://api.github.com/repos/huggingface/datasets/issues/5292/events
https://github.com/huggingface/datasets/issues/5292
1,463,053,832
I_kwDODunzps5XNG4I
5,292
Missing documentation build for versions 2.7.1 and 2.6.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github...
2022-11-24T09:42:10Z
2022-11-24T10:10:02Z
2022-11-24T10:10:02Z
MEMBER
null
null
null
After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered). There was a fix by: - #5291 However, both docs were then built from the main branch instead of their corresponding version branches. We are rebuilding them.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5292/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1668/comments
https://api.github.com/repos/huggingface/datasets/issues/1668/events
https://github.com/huggingface/datasets/pull/1668
776,552,854
MDExOlB1bGxSZXF1ZXN0NTQ3MDIxODI0
1,668
xed_en_fi dataset Cleanup
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-12-30T17:11:18Z
2020-12-30T17:22:44Z
2020-12-30T17:22:43Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1668.diff", "html_url": "https://github.com/huggingface/datasets/pull/1668", "merged_at": "2020-12-30T17:22:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/1668.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1668" }
Fix ClassLabel feature type and minor mistakes in the dataset card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1668/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4933/comments
https://api.github.com/repos/huggingface/datasets/issues/4933/events
https://github.com/huggingface/datasets/issues/4933
1,363,013,023
I_kwDODunzps5RPe2f
4,933
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
{ "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tianjianjiang", "id": 4812544, "login": "tianjianjiang", "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "type": "User", "url": "https://api.github.com/users/tianjianjiang" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instea...
2022-09-06T09:47:48Z
2022-09-06T11:44:27Z
2022-09-06T11:44:27Z
CONTRIBUTOR
null
null
null
## Describe the bug `Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. ## Steps to reproduce the bug (In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.) ```python from datasets import load_dataset ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead? ds_mc4_ja_2020 = ds_mc4_ja.filter( lambda example: example["timestamp"][:4] == "2020", batched=True, ) ``` ## Expected results No error ## Actual results ```python --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single offset=offset, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function indices_array = [i for i, to_keep in zip(indices, mask) if to_keep] TypeError: zip argument #2 must support iteration """ The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) /tmp/ipykernel_51348/2345782281.py in <module> 7 batched=True, 8 # batch_size=10_000, ----> 9 num_proc=111, 10 ) 11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter( /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 522 } 523 # apply actual function --> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 526 # re-apply format to the output /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 478 # Call actual function 479 --> 480 out = func(self, *args, **kwargs) 481 482 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, 
fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2920 new_fingerprint=new_fingerprint, 2921 input_columns=input_columns, -> 2922 desc=desc, 2923 ) 2924 new_dataset = copy.deepcopy(self) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2498 2499 for index, async_result in results.items(): -> 2500 transformed_shards[index] = async_result.get() 2501 2502 assert ( /opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout) 655 return self._value 656 else: --> 657 raise self._value 658 659 def _set(self, i, obj): TypeError: zip argument #2 must support iteration ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)
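As the maintainer's reply above indicates, the fix on the user side is to make the function batch-aware: with `batched=True` it receives a dict of lists and must return one boolean per example. A minimal sketch on a toy dataset (the toy data is illustrative; only the column name mirrors the mC4 example):

```python
from datasets import Dataset

ds = Dataset.from_dict({"timestamp": ["2019-01-01", "2020-05-01", "2020-07-09"]})

# With batched=True, the lambda receives a batch (dict of lists) and
# returns a list of booleans, one per example in the batch.
ds_2020 = ds.filter(
    lambda batch: [ts[:4] == "2020" for ts in batch["timestamp"]],
    batched=True,
)
print(ds_2020["timestamp"])  # ['2020-05-01', '2020-07-09']
```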
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4933/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4933/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1253/comments
https://api.github.com/repos/huggingface/datasets/issues/1253/events
https://github.com/huggingface/datasets/pull/1253
758,517,391
MDExOlB1bGxSZXF1ZXN0NTMzNjc4MDE1
1,253
add thainer
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
[]
closed
false
null
[]
null
[]
2020-12-07T13:41:54Z
2020-12-08T14:44:49Z
2020-12-08T14:44:49Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1253.diff", "html_url": "https://github.com/huggingface/datasets/pull/1253", "merged_at": "2020-12-08T14:44:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1253.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1253" }
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1253/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1253/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6267/comments
https://api.github.com/repos/huggingface/datasets/issues/6267/events
https://github.com/huggingface/datasets/issues/6267
1,916,443,262
I_kwDODunzps5yOpp-
6,267
Multi label class encoding
{ "avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4", "events_url": "https://api.github.com/users/jmif/events{/privacy}", "followers_url": "https://api.github.com/users/jmif/followers", "following_url": "https://api.github.com/users/jmif/following{/other_user}", "gists_url": "https://api.github.com/users/jmif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmif", "id": 1000442, "login": "jmif", "node_id": "MDQ6VXNlcjEwMDA0NDI=", "organizations_url": "https://api.github.com/users/jmif/orgs", "received_events_url": "https://api.github.com/users/jmif/received_events", "repos_url": "https://api.github.com/users/jmif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmif/subscriptions", "type": "User", "url": "https://api.github.com/users/jmif" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n ...
2023-09-27T22:48:08Z
2023-10-15T21:13:08Z
null
NONE
null
null
null
### Feature request I have a multi-label dataset and I'd like to be able to class encode the column and store the mapping directly in the features, just as I can with a single label column. `class_encode_column` currently does not support multi labels. Here's an example of what I'd like to encode: ``` data = { 'text': ['one', 'two', 'three', 'four'], 'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']] } dataset = Dataset.from_dict(data) dataset = dataset.class_encode_column('labels') ``` I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base) and from what I noticed the `ClassLabel` feature is still stored as an underlying raw data type of int, so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from arrow. I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented out tests, going for speed of POC here and didn't want to fight the IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset is correct and the dataset feature is as expected. After digging more I did notice a few issues - After loading from disk I noticed the type of the `labels` class is `Sequence`, not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel` but I couldn't find the encode / decode code paths that handle this. - I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this does miss the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets. I suspect my simple implementation will need some improvement as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior. ### Motivation See above - would like to support multi-label class encodings. ### Your contribution This would be a big help for us and we're open to contributing, but I'll likely need some guidance on how to implement it to fit the encode / decode flow. Some suggestions on tests would be great too; I'm guessing in addition to the class encode tests (that I'll need to expand) we'll need encode / decode tests.
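The reply above points to an existing workaround; a minimal sketch, assuming `cast_column` with `Sequence(ClassLabel(...))` performs the string-to-int encoding (the label names are taken from the example data):

```python
from datasets import ClassLabel, Dataset, Sequence

data = {
    "text": ["one", "two", "three", "four"],
    "labels": [["a", "b"], ["b"], ["b", "c"], ["a", "d"]],
}
ds = Dataset.from_dict(data)

# Cast the string lists to a sequence of ClassLabel; the mapping is then
# stored in the dataset features.
ds = ds.cast_column("labels", Sequence(ClassLabel(names=["a", "b", "c", "d"])))

print(ds.features["labels"].feature.num_classes)  # 4
print(ds[0]["labels"])  # [0, 1] -- labels encoded as ints
```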
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6267/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1499/comments
https://api.github.com/repos/huggingface/datasets/issues/1499/events
https://github.com/huggingface/datasets/pull/1499
763,464,693
MDExOlB1bGxSZXF1ZXN0NTM3OTIyNjA3
1,499
update the dataset id_newspapers_2018
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
[]
closed
false
null
[]
null
[]
2020-12-12T08:47:12Z
2020-12-14T15:28:07Z
2020-12-14T15:28:07Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1499.diff", "html_url": "https://github.com/huggingface/datasets/pull/1499", "merged_at": "2020-12-14T15:28:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1499.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1499" }
Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1499/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5895/comments
https://api.github.com/repos/huggingface/datasets/issues/5895/events
https://github.com/huggingface/datasets/issues/5895
1,725,467,252
I_kwDODunzps5m2Ip0
5,895
The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/45357817?v=4", "events_url": "https://api.github.com/users/DongHande/events{/privacy}", "followers_url": "https://api.github.com/users/DongHande/followers", "following_url": "https://api.github.com/users/DongHande/following{/other_user}", "gists_url": "https://api.github.com/users/DongHande/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DongHande", "id": 45357817, "login": "DongHande", "node_id": "MDQ6VXNlcjQ1MzU3ODE3", "organizations_url": "https://api.github.com/users/DongHande/orgs", "received_events_url": "https://api.github.com/users/DongHande/received_events", "repos_url": "https://api.github.com/users/DongHande/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DongHande/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DongHande/subscriptions", "type": "User", "url": "https://api.github.com/users/DongHande" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, @DongHande.\r\n\r\nI think the issue is caused by the metadata in the dataset card: in the header of the `README.md`, they state that the dataset has 4 splits (\"finetune\", \"reward\", \"rl\", \"evaluation\"). \r\n```yaml\r\n splits:\r\n - name: finetune\r\n num_bytes: 6674567576\r\...
2023-05-25T09:39:06Z
2023-05-29T02:32:12Z
2023-05-29T02:32:12Z
NONE
null
null
null
### Describe the bug When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that seems to be caused by confusing the dir name string and the split string of the dataset. When I use the script "datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)", it fails. But it succeeds when I add the `streaming=True` parameter. The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/ . The traceback logs are as below: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset builder_instance.download_and_prepare( File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare self._download_and_prepare( File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split split_info = self.info.splits[split_generator.name] File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__ instructions = make_file_instructions( File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions name2filenames = { File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp> info.name: filenames_for_dataset_split( File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split prefix = filename_prefix_for_split(dataset_name, split) File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split if os.path.basename(name) != name: File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename p = os.fspath(p) TypeError: expected str, bytes or os.PathLike object, not NoneType ### Steps to reproduce the bug 1. import datasets library function: ```from datasets import load_dataset``` 2. load dataset: ```ds=load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)``` ### Expected behavior The dataset can be loaded successfully without the streaming setting. ### Environment info Linux, python=3.9 datasets=2.12.0
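A hedged sketch of the workaround implied by the maintainer's diagnosis above: the dataset card declares the splits "finetune", "reward", "rl" and "evaluation", so requesting one of those names instead of "train" may avoid the `NoneType` split lookup (this is an assumption based on the card metadata, not a verified fix):

```python
from datasets import load_dataset

ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="finetune",  # assumption: split name matches the card metadata
    use_auth_token=True,
)
```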
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5895/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1579/comments
https://api.github.com/repos/huggingface/datasets/issues/1579/events
https://github.com/huggingface/datasets/pull/1579
767,808,465
MDExOlB1bGxSZXF1ZXN0NTQwMzk5OTY5
1,579
Adding CLIMATE-FEVER dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1658969?v=4", "events_url": "https://api.github.com/users/tdiggelm/events{/privacy}", "followers_url": "https://api.github.com/users/tdiggelm/followers", "following_url": "https://api.github.com/users/tdiggelm/following{/other_user}", "gists_url": "https://api.github.com/users/tdiggelm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tdiggelm", "id": 1658969, "login": "tdiggelm", "node_id": "MDQ6VXNlcjE2NTg5Njk=", "organizations_url": "https://api.github.com/users/tdiggelm/orgs", "received_events_url": "https://api.github.com/users/tdiggelm/received_events", "repos_url": "https://api.github.com/users/tdiggelm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tdiggelm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tdiggelm/subscriptions", "type": "User", "url": "https://api.github.com/users/tdiggelm" }
[]
closed
false
null
[]
null
[ "I `git rebase`ed my branch to `upstream/master` as suggested in point 7 of <https://huggingface.co/docs/datasets/share_dataset.html> and subsequently used `git pull` to be able to push to my remote branch. However, I think this messed up the history.\r\n\r\nPlease let me know if I should create a clean new PR with...
2020-12-15T16:49:22Z
2020-12-22T13:43:16Z
2020-12-22T13:43:15Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1579.diff", "html_url": "https://github.com/huggingface/datasets/pull/1579", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1579" }
This PR requests the addition of the CLIMATE-FEVER dataset: a dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate to multiple facets, as well as disputed claims where both supporting and refuting evidence are present. More information can be found at: - Homepage: <http://climatefever.ai> - Paper: <https://arxiv.org/abs/2012.00614>
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1579/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1579/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3017/comments
https://api.github.com/repos/huggingface/datasets/issues/3017/events
https://github.com/huggingface/datasets/pull/3017
1,015,215,528
PR_kwDODunzps4spE9m
3,017
Remove unused parameter in xdirname
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-10-04T13:55:53Z
2021-10-05T11:37:01Z
2021-10-05T11:37:00Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3017.diff", "html_url": "https://github.com/huggingface/datasets/pull/3017", "merged_at": "2021-10-05T11:37:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/3017.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3017" }
Minor fix to remove the unused argument `*p` in `xdirname`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3017/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3017/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1897/comments
https://api.github.com/repos/huggingface/datasets/issues/1897/events
https://github.com/huggingface/datasets/pull/1897
810,113,263
MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy
1,897
Fix PandasArrayExtensionArray conversion to native type
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-02-17T11:48:24Z
2021-02-17T13:15:16Z
2021-02-17T13:15:15Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1897.diff", "html_url": "https://github.com/huggingface/datasets/pull/1897", "merged_at": "2021-02-17T13:15:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/1897.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1897" }
To make the conversion to CSV work in #1887, we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because 1. the PandasExtensionArray.isna method was wrong 2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array, while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)) I fixed these two issues and now the conversion to native types works, and so does the export to CSV. cc @SBrandeis
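A small illustration of the second point, i.e. what pandas expects from an extension array converted with `dtype=object`: a 1D array of row objects rather than a 2D array. This is a generic numpy sketch of the idea, not the actual patch:

```python
import numpy as np

arr2d = np.arange(6).reshape(3, 2)

# np.asarray(arr2d, dtype=object) would keep the 2D shape, which pandas
# rejects here. Instead, build a 1D object array with one entry per row.
out = np.empty(len(arr2d), dtype=object)
for i, row in enumerate(arr2d):
    out[i] = row

print(out.shape)  # (3,) -- each element is a length-2 numpy array
```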
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1897/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4120/comments
https://api.github.com/repos/huggingface/datasets/issues/4120/events
https://github.com/huggingface/datasets/issues/4120
1,195,887,430
I_kwDODunzps5HR8tG
4,120
Representing dictionary (JSON) objects as features
{ "avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4", "events_url": "https://api.github.com/users/yanaiela/events{/privacy}", "followers_url": "https://api.github.com/users/yanaiela/followers", "following_url": "https://api.github.com/users/yanaiela/following{/other_user}", "gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanaiela", "id": 8031035, "login": "yanaiela", "node_id": "MDQ6VXNlcjgwMzEwMzU=", "organizations_url": "https://api.github.com/users/yanaiela/orgs", "received_events_url": "https://api.github.com/users/yanaiela/received_events", "repos_url": "https://api.github.com/users/yanaiela/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions", "type": "User", "url": "https://api.github.com/users/yanaiela" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2022-04-07T11:07:41Z
2022-04-07T11:07:41Z
null
CONTRIBUTOR
null
null
null
In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and that may differ between samples), as originally asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442). For instance: ``` sample1 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, }} sample2 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, }} sample3 = {"nps": { "a": {"id": 0, "text": "text1"}, "b": {"id": 1, "text": "text2"}, "c": {"id": 2, "text": "text3"}, "d": {"id": 3, "text": "text4"}, }} ``` the `nps` field cannot be represented as a Feature while maintaining its original structure. @lhoestq suggested adding JSON as a new feature type, which would solve this problem. It seems like an alternative solution would be to change the original data format, which isn't optimal in my case. Moreover, JSON is a common structure that is likely to be useful in future datasets as well.
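Until a JSON feature type exists, two hedged workaround sketches; both require touching the data format, which the request explicitly wants to avoid, so they are illustrations rather than a resolution:

```python
import json
from datasets import Dataset, Features, Value

# 1) Serialize the variable-key dict to a JSON string column.
ds_json = Dataset.from_dict(
    {"nps": [json.dumps({"a": {"id": 0, "text": "text1"},
                         "b": {"id": 1, "text": "text2"}})]}
)

# 2) Flatten to a list of structs with an explicit "key" field; in datasets,
#    a list containing one feature dict means "sequence of this struct".
features = Features(
    {"nps": [{"key": Value("string"), "id": Value("int32"), "text": Value("string")}]}
)
ds_flat = Dataset.from_dict(
    {"nps": [[{"key": "a", "id": 0, "text": "text1"},
              {"key": "b", "id": 1, "text": "text2"}]]},
    features=features,
)
```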
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4120/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5637/comments
https://api.github.com/repos/huggingface/datasets/issues/5637/events
https://github.com/huggingface/datasets/issues/5637
1,625,295,691
I_kwDODunzps5g4AtL
5,637
IterableDataset with_format does not support 'device' keyword for jax
{ "avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4", "events_url": "https://api.github.com/users/Lime-Cakes/events{/privacy}", "followers_url": "https://api.github.com/users/Lime-Cakes/followers", "following_url": "https://api.github.com/users/Lime-Cakes/following{/other_user}", "gists_url": "https://api.github.com/users/Lime-Cakes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Lime-Cakes", "id": 91322985, "login": "Lime-Cakes", "node_id": "MDQ6VXNlcjkxMzIyOTg1", "organizations_url": "https://api.github.com/users/Lime-Cakes/orgs", "received_events_url": "https://api.github.com/users/Lime-Cakes/received_events", "repos_url": "https://api.github.com/users/Lime-Cakes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Lime-Cakes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lime-Cakes/subscriptions", "type": "User", "url": "https://api.github.com/users/Lime-Cakes" }
[]
open
false
null
[]
null
[ "Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is n...
2023-03-15T11:04:12Z
2023-03-16T18:30:59Z
null
NONE
null
null
null
### Describe the bug As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device' to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'device'` Looking over the code, it seems IterableDataset supports only PyTorch, with no support for the jax `device` keyword: https://github.com/huggingface/datasets/blob/fc5c84f36684343bff3e424cb0fd1ac5ecdd66da/src/datasets/iterable_dataset.py#L1029 ### Steps to reproduce the bug 1. Load an IterableDataset (tested in streaming mode) 2. Call with_format('jax', device=device) ### Expected behavior I expect to call `with_format('jax', device=device)` as per [documentation](https://huggingface.co/docs/datasets/use_with_jax) without error ### Environment info Tested with installing newest (dev) and also pip release (2.10.1). - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.12.1 - PyArrow version: 11.0.0 - Pandas version: 1.3.5
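Until `device` is supported on IterableDataset, one hedged workaround is to place each example's arrays on the device manually; a sketch using `Dataset.to_iterable_dataset()` as a stand-in for streaming mode (the toy data is illustrative):

```python
import jax
import jax.numpy as jnp
from datasets import Dataset

# An IterableDataset, as a stand-in for a streamed hub dataset.
ids = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]}).to_iterable_dataset()
device = jax.devices()[0]

for example in ids:
    # Manual placement, since with_format('jax', device=...) is unavailable.
    x = jax.device_put(jnp.asarray(example["x"]), device)
```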
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5637/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5637/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3048/comments
https://api.github.com/repos/huggingface/datasets/issues/3048/events
https://github.com/huggingface/datasets/issues/3048
1,021,765,661
I_kwDODunzps485ugd
3,048
Identify which shard data belongs to
{ "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/borisdayma", "id": 715491, "login": "borisdayma", "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "repos_url": "https://api.github.com/users/borisdayma/repos", "site_admin": false, "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "type": "User", "url": "https://api.github.com/users/borisdayma" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch." ]
2021-10-09T17:46:35Z
2021-10-09T20:24:17Z
null
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-060b6bf86a1c.png) My suspicion is that either: * some of the sub-datasets are harder for the model than others * some of the sub-datasets are not formatted properly I'd like to identify which shards correspond to those jumps. **Describe the solution you'd like** It would be nice to have a key associated with each data sample or data batch containing details on where the data comes from (shard idx + item idx within the shard). This should be supported in both local and streaming mode. **Describe alternatives you've considered** A workaround would be for me to add the details (shard id, sample id) to each data sample myself. The inconvenience is that it requires users to reprocess and re-upload every dataset when they need this feature.
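A minimal sketch of that workaround: tag each sample with its origin before concatenating or interleaving the sub-datasets, so loss spikes can be traced back. The names and toy data are illustrative:

```python
from datasets import Dataset, concatenate_datasets

subsets = {
    "web": Dataset.from_dict({"text": ["a", "b"]}),
    "books": Dataset.from_dict({"text": ["c"]}),
}

# Add a source name and within-subset index to every sample.
tagged = [
    ds.map(lambda ex, i, name=name: {"source": name, "sample_id": i},
           with_indices=True)
    for name, ds in subsets.items()
]
full = concatenate_datasets(tagged)
print(full[2])  # {'text': 'c', 'source': 'books', 'sample_id': 0}
```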
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3048/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1274/comments
https://api.github.com/repos/huggingface/datasets/issues/1274/events
https://github.com/huggingface/datasets/pull/1274
758,943,174
MDExOlB1bGxSZXF1ZXN0NTM0MDI0MTQx
1,274
oclar-dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26907161?v=4", "events_url": "https://api.github.com/users/alaameloh/events{/privacy}", "followers_url": "https://api.github.com/users/alaameloh/followers", "following_url": "https://api.github.com/users/alaameloh/following{/other_user}", "gists_url": "https://api.github.com/users/alaameloh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alaameloh", "id": 26907161, "login": "alaameloh", "node_id": "MDQ6VXNlcjI2OTA3MTYx", "organizations_url": "https://api.github.com/users/alaameloh/orgs", "received_events_url": "https://api.github.com/users/alaameloh/received_events", "repos_url": "https://api.github.com/users/alaameloh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alaameloh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaameloh/subscriptions", "type": "User", "url": "https://api.github.com/users/alaameloh" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
2020-12-07T23:56:45Z
2020-12-09T15:36:08Z
2020-12-09T15:36:08Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1274.diff", "html_url": "https://github.com/huggingface/datasets/pull/1274", "merged_at": "2020-12-09T15:36:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/1274.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1274" }
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) is usable for Arabic sentiment classification on reviews of hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1274/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5502/comments
https://api.github.com/repos/huggingface/datasets/issues/5502/events
https://github.com/huggingface/datasets/pull/5502
1,570,091,225
PR_kwDODunzps5JN0aX
5,502
Added functionality: sort datasets by multiple keys
{ "avatar_url": "https://avatars.githubusercontent.com/u/7805682?v=4", "events_url": "https://api.github.com/users/MichlF/events{/privacy}", "followers_url": "https://api.github.com/users/MichlF/followers", "following_url": "https://api.github.com/users/MichlF/following{/other_user}", "gists_url": "https://api.github.com/users/MichlF/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichlF", "id": 7805682, "login": "MichlF", "node_id": "MDQ6VXNlcjc4MDU2ODI=", "organizations_url": "https://api.github.com/users/MichlF/orgs", "received_events_url": "https://api.github.com/users/MichlF/received_events", "repos_url": "https://api.github.com/users/MichlF/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichlF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichlF/subscriptions", "type": "User", "url": "https://api.github.com/users/MichlF" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks! I've left some comments.\r\n> \r\n> We should also add some tests, mainly to make sure `reverse` behaves as expected. Let me know if you need help with that.\r\n\r\nThanks for the offer! I couldn't find any guidelines on ho...
2023-02-03T16:17:00Z
2023-02-21T14:46:49Z
2023-02-21T14:39:23Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5502.diff", "html_url": "https://github.com/huggingface/datasets/pull/5502", "merged_at": "2023-02-21T14:39:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/5502.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5502" }
Implemented new functionality: sort datasets by multiple keys/columns, as discussed in https://github.com/huggingface/datasets/issues/5425.
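For context, a minimal sketch of the multi-key sort from the user side, assuming the list-based `column_names`/`reverse` signature this PR introduces (the data is illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": [1, 0, 1, 0], "score": [0.2, 0.9, 0.5, 0.1]})

# Sort by `label` ascending, then break ties by `score` descending.
ds_sorted = ds.sort(["label", "score"], reverse=[False, True])
print(ds_sorted["label"], ds_sorted["score"])
```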
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5502/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5502/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4543/comments
https://api.github.com/repos/huggingface/datasets/issues/4543/events
https://github.com/huggingface/datasets/pull/4543
1,280,379,781
PR_kwDODunzps46IiEp
4,543
[CI] Fix upstream hub test url
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Remaining CI failures are unrelated to this fix, merging" ]
2022-06-22T15:34:27Z
2022-06-22T16:37:40Z
2022-06-22T16:27:37Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4543.diff", "html_url": "https://github.com/huggingface/datasets/pull/4543", "merged_at": "2022-06-22T16:27:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/4543.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4543" }
Some tests were still using moon-staging instead of hub-ci. I also updated the token to use one dedicated to `datasets`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4543/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1526/comments
https://api.github.com/repos/huggingface/datasets/issues/1526/events
https://github.com/huggingface/datasets/pull/1526
764,591,243
MDExOlB1bGxSZXF1ZXN0NTM4NTgxNDg4
1,526
added Hebrew thisworld corpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/10088963?v=4", "events_url": "https://api.github.com/users/imvladikon/events{/privacy}", "followers_url": "https://api.github.com/users/imvladikon/followers", "following_url": "https://api.github.com/users/imvladikon/following{/other_user}", "gists_url": "https://api.github.com/users/imvladikon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/imvladikon", "id": 10088963, "login": "imvladikon", "node_id": "MDQ6VXNlcjEwMDg4OTYz", "organizations_url": "https://api.github.com/users/imvladikon/orgs", "received_events_url": "https://api.github.com/users/imvladikon/received_events", "repos_url": "https://api.github.com/users/imvladikon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/imvladikon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/imvladikon/subscriptions", "type": "User", "url": "https://api.github.com/users/imvladikon" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
2020-12-12T23:42:52Z
2020-12-18T10:47:30Z
2020-12-18T10:47:30Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1526.diff", "html_url": "https://github.com/huggingface/datasets/pull/1526", "merged_at": "2020-12-18T10:47:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1526.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1526" }
Added corpus from https://thisworld.online/, https://github.com/thisworld1/thisworld.online
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1526/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1526/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2657/comments
https://api.github.com/repos/huggingface/datasets/issues/2657/events
https://github.com/huggingface/datasets/issues/2657
945,822,829
MDU6SXNzdWU5NDU4MjI4Mjk=
2,657
`to_json` reporting enhancements
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-07-15T23:32:18Z
2021-07-15T23:33:53Z
null
CONTRIBUTOR
null
null
null
While using `to_json`, two things came to mind that would have made the experience easier on the user: 1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps it'd help to have it self-identify like you did for other progress bars recently. 2. It took me a while to make sense of the reported numbers: ``` 22%|██▏ | 1536/7076 [12:30:57<44:09:42, 28.70s/it] ``` So iteration here happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so the progress bar is perfect, but the numbers it reports are meaningless until one discovers that 1it=10K samples. And one still has to convert these in one's head, so it's not quick. I'm not exactly sure what the best way to approach this is; perhaps it can be part of `desc`? Or report M or K, so it'd be built-in if it were to print, e.g.: ``` 22%|██▏ | 15360K/70760K [12:30:57<44:09:42, 28.70s/it] ``` or ``` 22%|██▏ | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it] ``` (while of course remaining friendly to small datasets) I forget if tqdm lets you add a magnitude identifier to the running count. Thank you!
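For what it's worth, tqdm can already render the running count with SI prefixes via `unit_scale=True`. A minimal sketch of what a `desc`-aware, magnitude-scaled bar could look like (the loop body stands in for the actual `to_json` internals and is not the real implementation):

```python
from tqdm.auto import tqdm

total_examples = 70_760_000
with tqdm(total=total_examples, desc="to_json", unit="ex", unit_scale=True) as pbar:
    for _ in range(0, total_examples, 10_000):
        # ... write one 10K-example chunk to JSON here ...
        pbar.update(10_000)  # displayed as e.g. "15.4M/70.8M" thanks to unit_scale
```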
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2657/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2657/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2776/comments
https://api.github.com/repos/huggingface/datasets/issues/2776/events
https://github.com/huggingface/datasets/issues/2776
964,400,596
MDU6SXNzdWU5NjQ0MDA1OTY=
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[]
2021-08-09T21:23:17Z
2021-08-09T21:23:17Z
null
CONTRIBUTOR
null
null
null
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it should probably be similar to what https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub says about `datasets.config.IN_MEMORY_MAX_SIZE`: Quote: > The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero. Context: we are trying to use `config.HF_DATASETS_OFFLINE` here: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48 but are uncertain whether it's safe, since it's not documented as a public API. Thank you! @lhoestq, @albertvillanova
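A minimal sketch of the current behavior as I understand it — the environment variable is read once at import time to populate `config.HF_DATASETS_OFFLINE`:

```python
import os

# Must be set before `datasets` is imported, since the config is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

print(datasets.config.HF_DATASETS_OFFLINE)  # True: loading will not hit the network
```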
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2776/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2776/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4176/comments
https://api.github.com/repos/huggingface/datasets/issues/4176/events
https://github.com/huggingface/datasets/issues/4176
1,206,515,563
I_kwDODunzps5H6fdr
4,176
Very slow between two operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "events_url": "https://api.github.com/users/yananchen1989/events{/privacy}", "followers_url": "https://api.github.com/users/yananchen1989/followers", "following_url": "https://api.github.com/users/yananchen1989/following{/other_user}", "gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yananchen1989", "id": 26405281, "login": "yananchen1989", "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "organizations_url": "https://api.github.com/users/yananchen1989/orgs", "received_events_url": "https://api.github.com/users/yananchen1989/received_events", "repos_url": "https://api.github.com/users/yananchen1989/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions", "type": "User", "url": "https://api.github.com/users/yananchen1989" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-04-17T23:52:29Z
2022-04-18T00:03:00Z
2022-04-18T00:03:00Z
NONE
null
null
null
Hello, in the processing stage I use two operations. The first one, map + filter, is very fast and uses all cores, while the second step is very slow and does not use all cores. Also, there is a significant lag between them. Am I missing something? ``` raw_datasets = raw_datasets.map(split_func, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.overwrite_cache, desc = "running split para ==>")\ .filter(lambda example: example['text1']!='' and example['text2']!='', num_proc=args.preprocessing_num_workers, desc="filtering ==>") processed_datasets = raw_datasets.map( preprocess_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache, desc="Running tokenizer on dataset===>", ) ```
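One thing worth trying (a sketch, not a confirmed fix for the lag): `filter` also accepts `batched=True`, where the predicate returns one boolean per example in the batch; this avoids per-example Python overhead in the filtering step:

```python
def keep_non_empty(batch):
    # One boolean per example in the batch.
    return [t1 != "" and t2 != "" for t1, t2 in zip(batch["text1"], batch["text2"])]

raw_datasets = raw_datasets.filter(
    keep_non_empty,
    batched=True,
    num_proc=args.preprocessing_num_workers,
    desc="filtering ==>",
)
```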
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4176/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4176/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3585/comments
https://api.github.com/repos/huggingface/datasets/issues/3585/events
https://github.com/huggingface/datasets/issues/3585
1,105,821,470
I_kwDODunzps5B6X8e
3,585
Datasets streaming + map doesn't work for `Audio`
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "descript...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "This seems related to https://github.com/huggingface/datasets/issues/3505." ]
2022-01-17T12:55:42Z
2022-01-20T13:28:00Z
2022-01-20T13:28:00Z
MEMBER
null
null
null
## Describe the bug When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("common_voice", "en", streaming=True, split="train") def map_fn(batch): print("audio keys", batch["audio"].keys()) batch["audio"] = batch["audio"]["array"][:100] return batch ds = ds.map(map_fn) sample = next(iter(ds)) ``` I think the audio is somehow decoded before `.map(...)` is actually called. ## Expected results IMO, the above code snippet should work. ## Actual results ```bash audio keys dict_keys(['path', 'bytes']) Traceback (most recent call last): File "./run_audio.py", line 15, in <module> sample = next(iter(ds)) File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__ yield key, self.function(example) File "./run_audio.py", line 9, in map_fn batch["input"] = batch["audio"]["array"][:100] KeyError: 'array' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.1.dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3585/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1324/comments
https://api.github.com/repos/huggingface/datasets/issues/1324/events
https://github.com/huggingface/datasets/issues/1324
759,587,864
MDU6SXNzdWU3NTk1ODc4NjQ=
1,324
❓ Sharing ElasticSearch indexed dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"/path...
2020-12-08T16:25:58Z
2020-12-22T07:50:56Z
null
NONE
null
null
null
Hi there, First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing. **Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering - how can I know where it has been saved? - how can I share the indexed dataset with others? I tried to dig into the docs, but could not find anything about that. Thank you very much for your help. Best, Pietro Edit: apologies for the wrong label
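For reference, a rough sketch of how the Elasticsearch index APIs fit together as far as I can tell: the index itself lives in the ES server (not in the dataset's cache), so "sharing" it means sharing access to the ES instance or re-indexing on the other side. Method names are from `datasets`, but treat the exact arguments as assumptions:

```python
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train[:1000]")

# The index is built inside the ES server at localhost:9200, not saved with the dataset.
ds.add_elasticsearch_index("line", host="localhost", port="9200", es_index_name="my_index")

scores, retrieved = ds.get_nearest_examples("line", "my query", k=5)

# Later (or on another machine pointing at the same ES server), re-attach the index:
ds.load_elasticsearch_index("line", es_index_name="my_index", host="localhost", port="9200")
```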
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1324/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1324/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6035/comments
https://api.github.com/repos/huggingface/datasets/issues/6035/events
https://github.com/huggingface/datasets/pull/6035
1,805,087,687
PR_kwDODunzps5Vh_QR
6,035
Dataset representation
{ "avatar_url": "https://avatars.githubusercontent.com/u/63643948?v=4", "events_url": "https://api.github.com/users/Ganryuu/events{/privacy}", "followers_url": "https://api.github.com/users/Ganryuu/followers", "following_url": "https://api.github.com/users/Ganryuu/following{/other_user}", "gists_url": "https://api.github.com/users/Ganryuu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ganryuu", "id": 63643948, "login": "Ganryuu", "node_id": "MDQ6VXNlcjYzNjQzOTQ4", "organizations_url": "https://api.github.com/users/Ganryuu/orgs", "received_events_url": "https://api.github.com/users/Ganryuu/received_events", "repos_url": "https://api.github.com/users/Ganryuu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ganryuu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ganryuu/subscriptions", "type": "User", "url": "https://api.github.com/users/Ganryuu" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6035). All of your documentation changes will be reflected on that endpoint." ]
2023-07-14T15:42:37Z
2023-07-19T19:41:35Z
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6035.diff", "html_url": "https://github.com/huggingface/datasets/pull/6035", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6035.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6035" }
`__repr__` and `_repr_html_` are now both similar to those of Polars
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6035/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6035/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3071/comments
https://api.github.com/repos/huggingface/datasets/issues/3071/events
https://github.com/huggingface/datasets/issues/3071
1,024,893,493
I_kwDODunzps49FqI1
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder
{ "avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4", "events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}", "followers_url": "https://api.github.com/users/zixiliuUSC/followers", "following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}", "gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zixiliuUSC", "id": 49173327, "login": "zixiliuUSC", "node_id": "MDQ6VXNlcjQ5MTczMzI3", "organizations_url": "https://api.github.com/users/zixiliuUSC/orgs", "received_events_url": "https://api.github.com/users/zixiliuUSC/received_events", "repos_url": "https://api.github.com/users/zixiliuUSC/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions", "type": "User", "url": "https://api.github.com/users/zixiliuUSC" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```" ]
2021-10-13T07:32:10Z
2021-10-13T08:27:04Z
2021-10-13T08:27:03Z
NONE
null
null
null
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and the only dataset loading template I could find that handles my circumstance is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch?
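For the many-files case specifically, a minimal sketch (file names are placeholders) — the packaged `json`/`csv`/`text` loaders accept lists of files and globs per split, so no custom script is needed:

```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={
        "train": ["data/train-000.json", "data/train-001.json"],
        "validation": "data/valid-*.json",  # globs work too
    },
)
```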
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3071/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4006/comments
https://api.github.com/repos/huggingface/datasets/issues/4006/events
https://github.com/huggingface/datasets/pull/4006
1,179,367,195
PR_kwDODunzps408YnW
4,006
Use audio feature in ASR task template
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-03-24T11:15:22Z
2022-03-24T17:19:29Z
2022-03-24T16:48:02Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4006.diff", "html_url": "https://github.com/huggingface/datasets/pull/4006", "merged_at": "2022-03-24T16:48:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/4006.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4006" }
The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column. I changed that and updated all the datasets as well as the tests. The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero usage unfortunately (probably because users load the duplicate `multilingual_librispeech` directly instead), but it means we can update it. (This makes me think that we should deprecate `multilingual_librispeech` and redirect users to `facebook/multilingual_librispeech`.) This PR is also useful for the AudioFolder in https://github.com/huggingface/datasets/pull/3963
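A sketch of what the updated template looks like from the user side (the dataset and column names are illustrative, and the field names follow my reading of this PR rather than released docs):

```python
from datasets import load_dataset
from datasets.tasks import AutomaticSpeechRecognition

ds = load_dataset("superb", "asr", split="train")

# The template now maps an Audio column (not a file path column) to the task's input schema.
ds = ds.prepare_for_task(
    AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")
)
```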
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4006/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4006/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5014/comments
https://api.github.com/repos/huggingface/datasets/issues/5014/events
https://github.com/huggingface/datasets/issues/5014
1,383,422,639
I_kwDODunzps5SdVqv
5,014
I need to read a custom dataset in CoNLL format
{ "avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4", "events_url": "https://api.github.com/users/shell-nlp/events{/privacy}", "followers_url": "https://api.github.com/users/shell-nlp/followers", "following_url": "https://api.github.com/users/shell-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/shell-nlp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shell-nlp", "id": 39985245, "login": "shell-nlp", "node_id": "MDQ6VXNlcjM5OTg1MjQ1", "organizations_url": "https://api.github.com/users/shell-nlp/orgs", "received_events_url": "https://api.github.com/users/shell-nlp/received_events", "repos_url": "https://api.github.com/users/shell-nlp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shell-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shell-nlp/subscriptions", "type": "User", "url": "https://api.github.com/users/shell-nlp" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r...
2022-09-23T07:49:42Z
2022-11-02T11:57:15Z
null
NONE
null
null
null
I need to read a custom dataset in CoNLL format
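In the meantime, a minimal sketch of reading a CoNLL-style file without a dedicated builder, using `Dataset.from_generator` (the column layout is an assumption — adjust the `split()` indices to your file):

```python
from datasets import Dataset

def read_conll(path):
    tokens, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:  # a blank line separates sentences
                if tokens:
                    yield {"tokens": tokens, "tags": tags}
                    tokens, tags = [], []
            else:
                fields = line.split()
                tokens.append(fields[0])   # assumes the token is in the first column
                tags.append(fields[-1])    # assumes the label is in the last column
    if tokens:  # flush the final sentence
        yield {"tokens": tokens, "tags": tags}

ds = Dataset.from_generator(read_conll, gen_kwargs={"path": "my_data.conll"})
```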
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5014/timeline
null
reopened
false
https://api.github.com/repos/huggingface/datasets/issues/2528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2528/comments
https://api.github.com/repos/huggingface/datasets/issues/2528/events
https://github.com/huggingface/datasets/issues/2528
926,314,656
MDU6SXNzdWU5MjYzMTQ2NTY=
2,528
Logging cannot be set to NOTSET similar to transformers
{ "avatar_url": "https://avatars.githubusercontent.com/u/34662010?v=4", "events_url": "https://api.github.com/users/joshzwiebel/events{/privacy}", "followers_url": "https://api.github.com/users/joshzwiebel/followers", "following_url": "https://api.github.com/users/joshzwiebel/following{/other_user}", "gists_url": "https://api.github.com/users/joshzwiebel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joshzwiebel", "id": 34662010, "login": "joshzwiebel", "node_id": "MDQ6VXNlcjM0NjYyMDEw", "organizations_url": "https://api.github.com/users/joshzwiebel/orgs", "received_events_url": "https://api.github.com/users/joshzwiebel/received_events", "repos_url": "https://api.github.com/users/joshzwiebel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joshzwiebel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshzwiebel/subscriptions", "type": "User", "url": "https://api.github.com/users/joshzwiebel" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`." ]
2021-06-21T15:04:54Z
2021-06-24T14:42:47Z
2021-06-24T14:42:47Z
NONE
null
null
null
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449) `disable=bool(logging.get_verbosity() == logging.NOTSET)` and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493) `not_verbose = bool(logger.getEffectiveLevel() > WARNING)` ## Steps to reproduce the bug ```python import datasets import logging datasets.logging.get_verbosity = lambda : logging.NOTSET datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ``` ## Expected results The code should download and load the dataset as normal without displaying progress bars ## Actual results ```ImportError Traceback (most recent call last) <ipython-input-4-aec65c0509c6> in <module> ----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 713 dataset=True, 714 return_resolved_file_path=True, --> 715 use_auth_token=use_auth_token, 716 ) 717 # Set the base path for downloads as the parent of the script location ~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) 350 file_path = hf_bucket_url(path, filename=name, dataset=False) 351 try: --> 352 local_path = cached_path(file_path, download_config=download_config) 353 except FileNotFoundError: 354 raise FileNotFoundError( ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 289 use_etag=download_config.use_etag, 290 max_retries=download_config.max_retries, --> 291 use_auth_token=download_config.use_auth_token, 292 ) 293 elif os.path.exists(url_or_filename): ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 668 headers=headers, 669 cookies=cookies, --> 670 max_retries=max_retries, 671 ) 672 ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries) 493 initial=resume_size, 494 desc="Downloading", --> 495 disable=not_verbose, 496 ) 497 for chunk in response.iter_content(chunk_size=1024): ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs) 217 total = self.total * unit_scale if self.total else self.total 218 self.container = self.status_printer( --> 219 self.fp, total, self.desc, self.ncols) 220 self.sp = self.display 221 ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols) 95 if IProgress is None: # #187 #451 #558 #872 96 raise ImportError( ---> 97 "IProgress not found. Please update jupyter and ipywidgets." 
98 " See https://ipywidgets.readthedocs.io/en/stable" 99 "/user_install.html") ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8 - Python version: 3.7.10 - PyArrow version: 3.0.0 I am running this code on Deepnote and which important to this issue **does not** support IPywidgets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2528/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2528/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1623/comments
https://api.github.com/repos/huggingface/datasets/issues/1623/events
https://github.com/huggingface/datasets/pull/1623
772,950,710
MDExOlB1bGxSZXF1ZXN0NTQ0MTI2ODQ4
1,623
Add CLIMATE-FEVER dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1658969?v=4", "events_url": "https://api.github.com/users/tdiggelm/events{/privacy}", "followers_url": "https://api.github.com/users/tdiggelm/followers", "following_url": "https://api.github.com/users/tdiggelm/following{/other_user}", "gists_url": "https://api.github.com/users/tdiggelm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tdiggelm", "id": 1658969, "login": "tdiggelm", "node_id": "MDQ6VXNlcjE2NTg5Njk=", "organizations_url": "https://api.github.com/users/tdiggelm/orgs", "received_events_url": "https://api.github.com/users/tdiggelm/received_events", "repos_url": "https://api.github.com/users/tdiggelm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tdiggelm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tdiggelm/subscriptions", "type": "User", "url": "https://api.github.com/users/tdiggelm" }
[]
closed
false
null
[]
null
[ "Thank you @lhoestq for your comments! 😄 I added your suggested changes, ran the tests and regenerated `dataset_infos.json` and `dummy_data`." ]
2020-12-22T13:34:05Z
2020-12-22T17:53:53Z
2020-12-22T17:53:53Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1623.diff", "html_url": "https://github.com/huggingface/datasets/pull/1623", "merged_at": "2020-12-22T17:53:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1623.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1623" }
As suggested by @SBrandeis, fresh PR that adds CLIMATE-FEVER. Replaces PR #1579. --- A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. More information can be found at: * Homepage: http://climatefever.ai * Paper: https://arxiv.org/abs/2012.00614
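Once merged, loading should follow the usual pattern (a sketch — the split names depend on the final dataset script):

```python
from datasets import load_dataset

ds = load_dataset("climate_fever")
print(ds)  # each example: one claim plus five annotated evidence sentences
```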
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1623/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1623/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1971/comments
https://api.github.com/repos/huggingface/datasets/issues/1971/events
https://github.com/huggingface/datasets/pull/1971
819,714,231
MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0
1,971
Fix ArrowWriter closes stream at exit
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "Oh nice thanks for adding the context manager ! All the streams and RecordBatchWriter will be properly closed now. Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nNot sure about the error, it looks like a process crashed silently.\r\nLet me take a look", "...
2021-03-02T07:12:34Z
2021-03-10T16:36:57Z
2021-03-10T16:36:57Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1971.diff", "html_url": "https://github.com/huggingface/datasets/pull/1971", "merged_at": "2021-03-10T16:36:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/1971.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1971" }
The current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and/or an exception is raised before/during the call to `finalize()`. Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit.
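A minimal sketch of the intended usage after this change (ArrowWriter is an internal API, so treat the exact constructor arguments as assumptions):

```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

# The `with` block guarantees the underlying stream is closed,
# even if an exception is raised before/during finalize().
with ArrowWriter(features=Features({"text": Value("string")}), path="out.arrow") as writer:
    writer.write({"text": "hello"})
    writer.write({"text": "world"})
    num_examples, num_bytes = writer.finalize()
```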
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1971/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4873/comments
https://api.github.com/repos/huggingface/datasets/issues/4873/events
https://github.com/huggingface/datasets/issues/4873
1,347,592,022
I_kwDODunzps5QUp9W
4,873
Multiple dataloader memory error
{ "avatar_url": "https://avatars.githubusercontent.com/u/13767887?v=4", "events_url": "https://api.github.com/users/cyk1337/events{/privacy}", "followers_url": "https://api.github.com/users/cyk1337/followers", "following_url": "https://api.github.com/users/cyk1337/following{/other_user}", "gists_url": "https://api.github.com/users/cyk1337/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyk1337", "id": 13767887, "login": "cyk1337", "node_id": "MDQ6VXNlcjEzNzY3ODg3", "organizations_url": "https://api.github.com/users/cyk1337/orgs", "received_events_url": "https://api.github.com/users/cyk1337/received_events", "repos_url": "https://api.github.com/users/cyk1337/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyk1337/subscriptions", "type": "User", "url": "https://api.github.com/users/cyk1337" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating/interleaving the ones with the same structure/task (the API is `{concatenate_datasets/interleave_datasets}([dset1, ..., dset_N])`)?", "Hi @mariosasko, thank you for your reply. I tried pre-concatenating differ...
2022-08-23T08:59:50Z
2023-01-26T02:01:11Z
null
NONE
null
null
null
For the use of multiple datasets and tasks, we use more than 200 dataloaders, then pass them into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)` This causes a memory error when generating batches. Any solutions? ```bash File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch x = next(iterator) File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__ for batch in super().__iter__(): File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch data.append(next(self.dataset_iter)) File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__ for element in self.dataset: File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__ for key, example in self._iter(): File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter yield from ex_iterable File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__ new_key = "_".join(str(key) for key in keys) MemoryError ```
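For reference, a minimal sketch of the reduction suggested in the comments — collapsing many same-structure streaming datasets into a single loader via `interleave_datasets` (dataset names are placeholders):

```python
from datasets import interleave_datasets, load_dataset

ds1 = load_dataset("dataset_a", split="train", streaming=True)
ds2 = load_dataset("dataset_b", split="train", streaming=True)

# One mixed stream instead of one dataloader per dataset.
mixed = interleave_datasets([ds1, ds2], probabilities=[0.7, 0.3], seed=42)
```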
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4873/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4873/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5681/comments
https://api.github.com/repos/huggingface/datasets/issues/5681/events
https://github.com/huggingface/datasets/issues/5681
1,645,630,784
I_kwDODunzps5iFlVA
5,681
Add information about patterns search order to the doc about structuring repo
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gi...
null
[ "Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)", "Closed in #5693 " ]
2023-03-29T11:44:49Z
2023-04-03T18:31:11Z
2023-04-03T18:31:11Z
CONTRIBUTOR
null
null
null
Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). We should also reference this page in the pages about packaged loaders. I have a déjà vu that it has already been discussed at some point, but I don't remember where....
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5681/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2750/comments
https://api.github.com/repos/huggingface/datasets/issues/2750/events
https://github.com/huggingface/datasets/issues/2750
958,984,730
MDU6SXNzdWU5NTg5ODQ3MzA=
2,750
Second concatenation of datasets produces errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "@albertvillanova ", "Hi @Aktsvigun, thanks for reporting.\r\n\r\nI'm investigating this.", "Hi @albertvillanova ,\r\nany update on this? Can I probably help in some way?", "Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅 \r\n\r\nIn the meantime, ...
2021-08-03T10:47:04Z
2022-01-19T14:23:43Z
2022-01-19T14:19:05Z
NONE
null
null
null
Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the use of a tokenize function with `data.map`.

```python
from datasets import load_dataset, concatenate_datasets

data = load_dataset('trec')['train']
concatenated = concatenate_datasets([data, data])
concatenated_2 = concatenate_datasets([concatenated, concatenated])
print('True features of features:', concatenated.features)
print('\nProduced features of features:', concatenated_2.features)
```

outputs

```
True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)}

Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}
```

I am using `datasets` v1.11.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2750/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2750/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5456/comments
https://api.github.com/repos/huggingface/datasets/issues/5456/events
https://github.com/huggingface/datasets/pull/5456
1,553,905,148
PR_kwDODunzps5IXq92
5,456
feat: tqdm for `to_parquet`
{ "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zanussbaum", "id": 33707069, "login": "zanussbaum", "node_id": "MDQ6VXNlcjMzNzA3MDY5", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "repos_url": "https://api.github.com/users/zanussbaum/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "type": "User", "url": "https://api.github.com/users/zanussbaum" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-23T22:05:38Z
2023-01-24T11:26:47Z
2023-01-24T11:17:12Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5456.diff", "html_url": "https://github.com/huggingface/datasets/pull/5456", "merged_at": "2023-01-24T11:17:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5456.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5456" }
As described in #5418. I also noticed that the `to_json` function supports multiple workers whereas `to_parquet` does not; is that not possible or not needed with Parquet, or simply something that hasn't been implemented yet?
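For reference, a minimal sketch of what a tqdm-wrapped, batched Parquet write could look like; this is a hypothetical helper for illustration, not the code added by this PR:

```python
import pyarrow as pa
import pyarrow.parquet as pq
from tqdm.auto import tqdm

def to_parquet_with_progress(ds, path, batch_size=10_000):
    """Write a datasets.Dataset to Parquet, batch by batch, with a progress bar."""
    schema = ds.features.arrow_schema
    with pq.ParquetWriter(path, schema) as writer:
        for offset in tqdm(range(0, len(ds), batch_size), unit="ba"):
            batch = ds[offset : offset + batch_size]  # a dict of columns
            writer.write_table(pa.Table.from_pydict(batch, schema=schema))
```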
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5456/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5325/comments
https://api.github.com/repos/huggingface/datasets/issues/5325/events
https://github.com/huggingface/datasets/issues/5325
1,471,536,822
I_kwDODunzps5Xtd62
5,325
map(...batch_size=None) for IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/frankier", "id": 299380, "login": "frankier", "node_id": "MDQ6VXNlcjI5OTM4MA==", "organizations_url": "https://api.github.com/users/frankier/orgs", "received_events_url": "https://api.github.com/users/frankier/received_events", "repos_url": "https://api.github.com/users/frankier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "type": "User", "url": "https://api.github.com/users/frankier" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}"...
null
[ "Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix.", "@mariosasko as this is something simple maybe I can include it as part of https://github.com/huggingface/datasets/pull/5311? Let me know :+1:", "#self-assign", "Feel free to close ...
2022-12-01T15:43:42Z
2022-12-07T15:54:43Z
2022-12-07T15:54:42Z
CONTRIBUTOR
null
null
null
### Feature request

`Dataset.map(...)` allows `batch_size` to be `None`. It would be nice if `IterableDataset` did too (see the call-shape sketch below).

### Motivation

It may seem a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, but there are a couple of reasons why this would be nice.

One is that `load_dataset(...)` can return either `IterableDataset` or `Dataset`, so mypy will complain about `batch_size=None` even when we know we have a `Dataset`. Of course we can do `assert isinstance(d, datasets.Dataset)`, but it is a mild inconvenience. What's more annoying is that whenever we use something like `combine_datasets(...)`, we end up with the union type again, and so have to repeat the assert.

Another is that we could actually end up with an `IterableDataset` small enough for memory in normal/correct usage, e.g. by filtering a massive `IterableDataset`. For practical purposes, an alternative would be to convert the iterable dataset to a map-style dataset, but it is not obvious how to do this.

### Your contribution

Not this time.
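The requested call shape, sketched with an arbitrary streamable dataset; as with `Dataset.map`, `batch_size=None` here would mean processing the full dataset as a single batch:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train", streaming=True)  # an IterableDataset

# Desired: the same signature Dataset.map already accepts.
ds = ds.map(
    lambda batch: {"text": [t.lower() for t in batch["text"]]},
    batched=True,
    batch_size=None,
)
```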
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5325/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4087/comments
https://api.github.com/repos/huggingface/datasets/issues/4087/events
https://github.com/huggingface/datasets/pull/4087
1,191,819,805
PR_kwDODunzps41lnfO
4,087
Fix BeamWriter output Parquet file
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-04T13:46:50Z
2022-04-05T15:00:40Z
2022-04-05T14:54:48Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4087.diff", "html_url": "https://github.com/huggingface/datasets/pull/4087", "merged_at": "2022-04-05T14:54:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4087.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4087" }
Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than the Arrow files.

This PR:
- writes the Parquet file preserving the original schema and without serialization, thus avoiding the serialization overhead and producing a smaller output file (illustrated below);
- fixes the `parquet_to_arrow` function.
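An illustration of the difference (not the actual `BeamWriter` code; file names and data are made up):

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq

rows = [{"id": i, "text": f"example {i}"} for i in range(1000)]

# Old behavior (simplified schema): every row serialized into one JSON string column.
pq.write_table(pa.table({"data": [json.dumps(r) for r in rows]}), "old.parquet")

# New behavior: the original schema (id: int64, text: string) is preserved,
# so no JSON serialization overhead is stored.
pq.write_table(pa.Table.from_pylist(rows), "new.parquet")
```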
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4087/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4087/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4335/comments
https://api.github.com/repos/huggingface/datasets/issues/4335/events
https://github.com/huggingface/datasets/pull/4335
1,234,157,123
PR_kwDODunzps43usJP
4,335
Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech Offensive
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it i...
2022-05-12T15:28:16Z
2022-05-16T16:31:10Z
2022-05-16T16:23:09Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4335.diff", "html_url": "https://github.com/huggingface/datasets/pull/4335", "merged_at": "2022-05-16T16:23:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/4335.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4335" }
Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4335/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4335/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2469/comments
https://api.github.com/repos/huggingface/datasets/issues/2469/events
https://github.com/huggingface/datasets/pull/2469
916,440,418
MDExOlB1bGxSZXF1ZXN0NjY2MTA1OTk1
2,469
Bump tqdm version
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows 😞 \r\n\r\nit's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets`", "Closing since this is now fixed in #2482 " ]
2021-06-09T17:24:40Z
2021-06-11T15:03:42Z
2021-06-11T15:03:36Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2469.diff", "html_url": "https://github.com/huggingface/datasets/pull/2469", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2469.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2469" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2469/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3829/comments
https://api.github.com/repos/huggingface/datasets/issues/3829/events
https://github.com/huggingface/datasets/issues/3829
1,160,154,352
I_kwDODunzps5FJozw
3,829
[📄 Docs] Create a `datasets` performance guide.
{ "avatar_url": "https://avatars.githubusercontent.com/u/3712347?v=4", "events_url": "https://api.github.com/users/dynamicwebpaige/events{/privacy}", "followers_url": "https://api.github.com/users/dynamicwebpaige/followers", "following_url": "https://api.github.com/users/dynamicwebpaige/following{/other_user}", "gists_url": "https://api.github.com/users/dynamicwebpaige/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dynamicwebpaige", "id": 3712347, "login": "dynamicwebpaige", "node_id": "MDQ6VXNlcjM3MTIzNDc=", "organizations_url": "https://api.github.com/users/dynamicwebpaige/orgs", "received_events_url": "https://api.github.com/users/dynamicwebpaige/received_events", "repos_url": "https://api.github.com/users/dynamicwebpaige/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dynamicwebpaige/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dynamicwebpaige/subscriptions", "type": "User", "url": "https://api.github.com/users/dynamicwebpaige" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! Yes this is definitely something we'll explore, since optimizing processing pipelines can be challenging and because performance is key here: we want anyone to be able to play with large-scale datasets more easily.\r\n\r\nI think we'll start by documenting the performance of the dataset transforms we provide,...
2022-03-05T00:28:06Z
2022-03-10T16:24:27Z
null
NONE
null
null
null
## Brief Overview

Downloading, saving, and preprocessing large datasets with the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and debug, especially for users who are less experienced with building deep learning experiments.

## Feature Request

Could we create a performance guide for using `datasets`, similar to:

* [Better performance with the `tf.data` API](https://www.tensorflow.org/guide/data_performance)
* [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis)

This performance guide should detail practical options for improving performance with `datasets`, and enumerate common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify performance bottlenecks (example below).

![image](https://user-images.githubusercontent.com/3712347/156859152-a3cb9565-3ec6-4d39-8e77-56d0a75a4954.png)

## Related Issues

* [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670)
* [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499)
* [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004)
* [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830)
* [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315)
* [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911)
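For instance, a minimal sketch of profiling a `datasets`-backed input pipeline with the PyTorch profiler (`torch.profiler` is a real API; the dataset choice is arbitrary):

```python
from torch.profiler import ProfilerActivity, profile
from torch.utils.data import DataLoader
from datasets import load_dataset

ds = load_dataset("imdb", split="train").with_format("torch")
loader = DataLoader(ds, batch_size=32)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    for i, batch in enumerate(loader):
        if i == 10:  # profile a few batches of the input pipeline
            break

# Spot where time is spent in the data-loading path.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```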
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3829/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4496/comments
https://api.github.com/repos/huggingface/datasets/issues/4496/events
https://github.com/huggingface/datasets/pull/4496
1,271,945,704
PR_kwDODunzps45sUnW
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!" ]
2022-06-15T09:29:16Z
2022-07-07T17:06:51Z
2022-07-07T16:55:48Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4496.diff", "html_url": "https://github.com/huggingface/datasets/pull/4496", "merged_at": "2022-07-07T16:55:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4496.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4496" }
As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` wherever the assertion compares tuples, in order to make the test failure messages more verbose.
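A minimal illustration of the replacement (a hypothetical test case, not one from this PR):

```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_shape(self):
        shape = (2, 3)
        # Before: generic equality assertion.
        self.assertEqual(shape, (2, 3))
        # After: type-specific assertion; on failure it also verifies that
        # both operands are tuples and prints a tuple-aware diff.
        self.assertTupleEqual(shape, (2, 3))

if __name__ == "__main__":
    unittest.main()
```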
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4496/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1887/comments
https://api.github.com/repos/huggingface/datasets/issues/1887/events
https://github.com/huggingface/datasets/pull/1887
809,229,809
MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy
1,887
Implement to_csv for Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
[]
closed
false
null
[]
null
[ "@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.ht...
2021-02-16T11:27:29Z
2021-02-19T09:41:59Z
2021-02-19T09:41:59Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1887.diff", "html_url": "https://github.com/huggingface/datasets/pull/1887", "merged_at": "2021-02-19T09:41:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1887" }
cc @thomwolf

`to_csv` supports passing either a file path or a *binary* file object. The writing is batched to avoid loading the whole table in memory.
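A usage sketch of the new method (the dataset choice is arbitrary):

```python
from datasets import load_dataset

ds = load_dataset("glue", "cola", split="train")

# With a file path:
ds.to_csv("cola_train.csv")

# With a binary file object:
with open("cola_train.csv", "wb") as f:
    ds.to_csv(f)
```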
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1887/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2930/comments
https://api.github.com/repos/huggingface/datasets/issues/2930/events
https://github.com/huggingface/datasets/issues/2930
998,154,311
I_kwDODunzps47fqBH
2,930
Mutable columns argument breaks set_format
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_...
null
[ "Pushed a fix to my branch #2731 " ]
2021-09-16T12:27:22Z
2021-09-16T13:50:53Z
2021-09-16T13:50:53Z
MEMBER
null
null
null
## Describe the bug

If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change.

## Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("glue", "cola")
column_list = ["idx", "label"]
dataset.set_format("python", columns=column_list)
column_list[1] = "foo"  # Change the list after we call `set_format`
dataset['train'][:4].keys()
```

## Expected results

```python
dict_keys(['idx', 'label'])
```

## Actual results

```python
dict_keys(['idx'])
```
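The usual fix for this class of bug is a defensive copy of the argument; a standalone illustration (not the actual `datasets` source):

```python
class Formatter:
    def set_format(self, columns):
        # Copy, so later mutation of the caller's list has no effect.
        self._columns = list(columns)

f = Formatter()
cols = ["idx", "label"]
f.set_format(cols)
cols[1] = "foo"
print(f._columns)  # ['idx', 'label'], unaffected by the mutation
```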
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2930/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2930/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4312/comments
https://api.github.com/repos/huggingface/datasets/issues/4312/events
https://github.com/huggingface/datasets/pull/4312
1,231,662,775
PR_kwDODunzps43mlug
4,312
added TR-News dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/25901065?v=4", "events_url": "https://api.github.com/users/batubayk/events{/privacy}", "followers_url": "https://api.github.com/users/batubayk/followers", "following_url": "https://api.github.com/users/batubayk/following{/other_user}", "gists_url": "https://api.github.com/users/batubayk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/batubayk", "id": 25901065, "login": "batubayk", "node_id": "MDQ6VXNlcjI1OTAxMDY1", "organizations_url": "https://api.github.com/users/batubayk/orgs", "received_events_url": "https://api.github.com/users/batubayk/received_events", "repos_url": "https://api.github.com/users/batubayk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/batubayk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/batubayk/subscriptions", "type": "User", "url": "https://api.github.com/users/batubayk" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "Thanks for your contribution, @batubayk.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nI would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
2022-05-10T20:33:00Z
2022-10-03T09:36:45Z
2022-10-03T09:36:45Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4312.diff", "html_url": "https://github.com/huggingface/datasets/pull/4312", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4312.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4312" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4312/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4312/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5225/comments
https://api.github.com/repos/huggingface/datasets/issues/5225/events
https://github.com/huggingface/datasets/issues/5225
1,444,305,183
I_kwDODunzps5WFlkf
5,225
Add video feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "008672", "default": true...
open
false
null
[]
null
[ "@NielsRogge @rwightman may have additional requirements regarding this feature.\r\n\r\nWhen adding a new (decodable) type, the hardest part is choosing the right decoding library. What I mean by \"right\" here is that it has all the features we need and is easy to install (with GPU support?).\r\n\r\nSome candidate...
2022-11-10T17:36:11Z
2022-12-02T15:13:15Z
null
CONTRIBUTOR
null
null
null
### Feature request

Add a `Video` feature to the library so folks can include videos in their datasets.

### Motivation

Being able to load video data would be quite helpful. However, there are some challenges when it comes to videos:

1. Videos, unlike images, can end up being extremely large files.
2. Often, when training video models, you need to do some very specific sampling. Videos might need to be broken down into X number of clips used for training/inference.
3. Videos have an additional audio stream, which must be accounted for.
4. The feature needs to be able to encode/decode videos (with the right video settings) from bytes; see the decoding sketch below.

### Your contribution

I worked on this a while back in [this (now closed) PR](https://github.com/huggingface/datasets/pull/4532). It used a library I made called [encoded_video](https://github.com/nateraw/encoded-video), which is basically the utils from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo), but without the `torch` dependency. It included the ability to read/write from bytes, as we need to do here. We don't want to use a sketchy library that I made as a dependency in this repo, though.

Would love to use this issue as a place to:
- brainstorm ideas on how to do this right
- list ways/examples to work around it for now

CC @sayakpaul @mariosasko @fcakyon
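One possible shape for the decode path, sketched with PyAV (one candidate library among those under discussion; purely illustrative, not an accepted design):

```python
import io
import av  # PyAV

def decode_video_frames(video):
    """Decode a video, given raw bytes or a file path, into a list of RGB frames."""
    source = io.BytesIO(video) if isinstance(video, bytes) else video
    with av.open(source) as container:
        # Decode the first video stream; audio streams would need separate handling.
        return [frame.to_ndarray(format="rgb24") for frame in container.decode(video=0)]
```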
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5225/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5225/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1407/comments
https://api.github.com/repos/huggingface/datasets/issues/1407/events
https://github.com/huggingface/datasets/pull/1407
760,581,756
MDExOlB1bGxSZXF1ZXN0NTM1Mzg5ODQx
1,407
Add Tweet Eval Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nSeeing that it has been almost two months to this draft, I'm willing to take this forward if you and @abhishekkrthakur don't mind. :)", "Hi @gchhablani !\r\nSure if @abhishekkrthakur doesn't mind\r\nThanks for your help :)", "Please feel free :) ", "Hi @lhoestq, @abhishekkrthakur \r\n\r\n...
2020-12-09T18:48:57Z
2023-09-24T09:52:03Z
2021-02-26T08:54:04Z
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/1407.diff", "html_url": "https://github.com/huggingface/datasets/pull/1407", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1407.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1407" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1407/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1407/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1492/comments
https://api.github.com/repos/huggingface/datasets/issues/1492/events
https://github.com/huggingface/datasets/pull/1492
762,965,239
MDExOlB1bGxSZXF1ZXN0NTM3NDYxMjc3
1,492
OPUS UBUNTU dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4", "events_url": "https://api.github.com/users/rkc007/events{/privacy}", "followers_url": "https://api.github.com/users/rkc007/followers", "following_url": "https://api.github.com/users/rkc007/following{/other_user}", "gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rkc007", "id": 22396042, "login": "rkc007", "node_id": "MDQ6VXNlcjIyMzk2MDQy", "organizations_url": "https://api.github.com/users/rkc007/orgs", "received_events_url": "https://api.github.com/users/rkc007/received_events", "repos_url": "https://api.github.com/users/rkc007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rkc007/subscriptions", "type": "User", "url": "https://api.github.com/users/rkc007" }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
2020-12-11T22:01:37Z
2020-12-17T14:38:16Z
2020-12-17T14:38:15Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1492.diff", "html_url": "https://github.com/huggingface/datasets/pull/1492", "merged_at": "2020-12-17T14:38:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/1492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1492" }
Dataset: http://opus.nlpl.eu/Ubuntu.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1492/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4804/comments
https://api.github.com/repos/huggingface/datasets/issues/4804/events
https://github.com/huggingface/datasets/issues/4804
1,332,630,358
I_kwDODunzps5PblNW
4,804
streaming dataset with concatenating splits raises an error
{ "avatar_url": "https://avatars.githubusercontent.com/u/37621276?v=4", "events_url": "https://api.github.com/users/Bing-su/events{/privacy}", "followers_url": "https://api.github.com/users/Bing-su/followers", "following_url": "https://api.github.com/users/Bing-su/following{/other_user}", "gists_url": "https://api.github.com/users/Bing-su/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Bing-su", "id": 37621276, "login": "Bing-su", "node_id": "MDQ6VXNlcjM3NjIxMjc2", "organizations_url": "https://api.github.com/users/Bing-su/orgs", "received_events_url": "https://api.github.com/users/Bing-su/received_events", "repos_url": "https://api.github.com/users/Bing-su/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Bing-su/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bing-su/subscriptions", "type": "User", "url": "https://api.github.com/users/Bing-su" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi! Only the name of a particular split (\"train\", \"test\", ...) is supported as a split pattern if `streaming=True`. We plan to address this limitation soon.", "Hi, have you addressed this yet?", "yes, same error occurs.\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# error\r\nrepo = \"nateraw/ad...
2022-08-09T02:41:56Z
2023-05-11T01:42:59Z
null
NONE
null
null
null
## Describe the bug

Streaming a dataset with concatenated splits raises an error.

## Steps to reproduce the bug

```python
from datasets import load_dataset

# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```

```python
from datasets import load_dataset

# error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation", streaming=True)
```

```sh
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
[<ipython-input-4-a6ae02d63899>](https://localhost:8080/#) in <module>()
      3 # error
      4 repo = "nateraw/ade20k-tiny"
----> 5 dataset = load_dataset(repo, split="train+validation", streaming=True)

1 frames
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
   1030             splits_generator = splits_generators[split]
   1031         else:
-> 1032             raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
   1033
   1034         # Create a dataset for each of the given splits

ValueError: Bad split: train+validation. Available splits: ['validation', 'train']
```

[Colab](https://colab.research.google.com/drive/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)

## Expected results

Either load successfully, or throw an error saying this is not supported.

## Actual results

See the traceback above.

## Environment info

- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0 (Windows 11 x64)
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
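A workaround sketch in the meantime, assuming a `datasets` version where `concatenate_datasets` accepts iterable datasets:

```python
from datasets import concatenate_datasets, load_dataset

repo = "nateraw/ade20k-tiny"
train = load_dataset(repo, split="train", streaming=True)
validation = load_dataset(repo, split="validation", streaming=True)

# Stream each split separately, then chain them into one IterableDataset.
dataset = concatenate_datasets([train, validation])
```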
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4804/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4804/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3231/comments
https://api.github.com/repos/huggingface/datasets/issues/3231/events
https://github.com/huggingface/datasets/pull/3231
1,047,170,906
PR_kwDODunzps4uNmWT
3,231
Group tests in multiprocessing workers by test file
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-11-08T08:46:03Z
2021-11-08T13:19:18Z
2021-11-08T08:59:44Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3231.diff", "html_url": "https://github.com/huggingface/datasets/pull/3231", "merged_at": "2021-11-08T08:59:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3231.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3231" }
By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker. Therefore, the fixture `hf_token` will be called only once (and from the same worker). Related to: #3200. Fix #3219.
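Assuming the suite parallelizes with pytest-xdist (which is what this kind of grouping usually relies on), the relevant knob is the `--dist loadfile` mode; a sketch of invoking it from Python:

```python
import pytest

# "-n 4": four workers; "--dist loadfile": all tests from one file are sent
# to the same worker, so tests sharing a file also share that worker's
# session fixtures (e.g. hf_token is set up once for test_load.py).
pytest.main(["-n", "4", "--dist", "loadfile", "tests/"])
```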
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3231/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3231/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1829/comments
https://api.github.com/repos/huggingface/datasets/issues/1829/events
https://github.com/huggingface/datasets/pull/1829
802,693,600
MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5
1,829
Add Tweet Eval Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
[]
closed
false
null
[]
null
[]
2021-02-06T12:36:25Z
2021-02-08T13:17:54Z
2021-02-08T13:17:53Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "html_url": "https://github.com/huggingface/datasets/pull/1829", "merged_at": "2021-02-08T13:17:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829" }
Closes draft PR #1407.

Notes:
1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels.
2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/mapping.txt).
3. I do not understand @abhishekkrthakur's example generator in #1407. Maybe he was trying to build on code from some other dataset.

Requesting @lhoestq to review.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1829/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2364/comments
https://api.github.com/repos/huggingface/datasets/issues/2364/events
https://github.com/huggingface/datasets/pull/2364
892,420,500
MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx
2,364
README updated for SNLI, MNLI
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[ "Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ?", "@lhoestq I agree, I'll look into it." ]
2021-05-15T11:37:59Z
2021-05-17T14:14:27Z
2021-05-17T13:34:19Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2364.diff", "html_url": "https://github.com/huggingface/datasets/pull/2364", "merged_at": "2021-05-17T13:34:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/2364.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2364" }
Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training.

@lhoestq the `check_code_quality` test might fail for MNLI, as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses'.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2364/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6290/comments
https://api.github.com/repos/huggingface/datasets/issues/6290/events
https://github.com/huggingface/datasets/issues/6290
1,935,629,679
I_kwDODunzps5zX11v
6,290
Incremental dataset (e.g. `.push_to_hub(..., append=True)`)
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Yea I think waiting for #6269 would be best, or branching from it. For reference, this [PR](https://github.com/LAION-AI/Discord-Scrapers/pull/2) is progressing pretty well which will do similar using the hf hub for our LAION dataset bot https://github.com/LAION-AI/Discord-Scrapers/pull/2. " ]
2023-10-10T15:18:03Z
2023-10-13T16:05:26Z
null
CONTRIBUTOR
null
null
null
### Feature request

Have the possibility to do `ds.push_to_hub(..., append=True)`.

### Motivation

Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675c9607bdffb208d8f). Discussed internally on [slack](https://huggingface.slack.com/archives/C02EMARJ65P/p1696950642610639?thread_ts=1690554266.830949&cid=C02EMARJ65P).

### Your contribution

What I suggest to do for parquet datasets is to use `CommitOperationCopy` + `CommitOperationDelete` from `huggingface_hub` (see the sketch after this list):

1. list files
2. copy files from parquet-0001-of-0004 to parquet-0001-of-0005
3. delete files like parquet-0001-of-0004
4. generate + add the last parquet file parquet-0005-of-0005

=> make a single commit with all commit operations at once.

I think it should be quite straightforward to implement. Happy to review a PR (maybe conflicting with the ongoing "1 commit push_to_hub" PR https://github.com/huggingface/datasets/pull/6269).
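A hypothetical sketch of that flow, built from real `huggingface_hub` primitives (`CommitOperationAdd`/`CommitOperationCopy`/`CommitOperationDelete` and `HfApi.create_commit`); the repo id, shard naming, and renaming logic are illustrative:

```python
from huggingface_hub import (
    CommitOperationAdd,
    CommitOperationCopy,
    CommitOperationDelete,
    HfApi,
)

api = HfApi()
repo_id = "user/my-dataset"  # placeholder

# 1. list the existing parquet shards
shards = sorted(
    f for f in api.list_repo_files(repo_id, repo_type="dataset") if f.endswith(".parquet")
)
n_new = len(shards) + 1

operations = []
# 2. + 3. rename old shards server-side via copy + delete (no re-upload)
for i, old_name in enumerate(shards):
    new_name = f"data/train-{i:05d}-of-{n_new:05d}.parquet"
    operations.append(CommitOperationCopy(src_path_in_repo=old_name, path_in_repo=new_name))
    operations.append(CommitOperationDelete(path_in_repo=old_name))
# 4. add the newly generated shard
operations.append(
    CommitOperationAdd(
        path_in_repo=f"data/train-{len(shards):05d}-of-{n_new:05d}.parquet",
        path_or_fileobj="new_shard.parquet",
    )
)

# single atomic commit with all operations at once
api.create_commit(
    repo_id,
    repo_type="dataset",
    operations=operations,
    commit_message="Append one parquet shard",
)
```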
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/6290/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6290/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5358/comments
https://api.github.com/repos/huggingface/datasets/issues/5358/events
https://github.com/huggingface/datasets/pull/5358
1,495,270,822
PR_kwDODunzps5FYBcq
5,358
Fix `fs.open` resource leaks
{ "avatar_url": "https://avatars.githubusercontent.com/u/297847?v=4", "events_url": "https://api.github.com/users/tkukurin/events{/privacy}", "followers_url": "https://api.github.com/users/tkukurin/followers", "following_url": "https://api.github.com/users/tkukurin/following{/other_user}", "gists_url": "https://api.github.com/users/tkukurin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tkukurin", "id": 297847, "login": "tkukurin", "node_id": "MDQ6VXNlcjI5Nzg0Nw==", "organizations_url": "https://api.github.com/users/tkukurin/orgs", "received_events_url": "https://api.github.com/users/tkukurin/received_events", "repos_url": "https://api.github.com/users/tkukurin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tkukurin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tkukurin/subscriptions", "type": "User", "url": "https://api.github.com/users/tkukurin" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko Sorry, I didn't check tests/style after doing a merge from the Git UI last week. Thx for fixing. \r\n\r\nFYI I'm getting \"Only those with [write access](https://docs.github.com/articles/what-are-the-different-access-perm...
2022-12-13T22:35:51Z
2023-01-05T16:46:31Z
2023-01-05T15:59:51Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5358.diff", "html_url": "https://github.com/huggingface/datasets/pull/5358", "merged_at": "2023-01-05T15:59:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/5358.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5358" }
Invoking `{load,save}_from_dict` results in resource-leak warnings; this should fix them. Introduces no significant logic changes.
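A minimal sketch of the pattern, assuming a generic `fsspec` filesystem (the path and payload are illustrative); the point is that a `with` block guarantees the handle returned by `fs.open` is closed:

```python
import json
import fsspec

fs = fsspec.filesystem("file")  # any fsspec filesystem behaves the same

state = {"key": "value"}

# before: f = fs.open("state.json", "w"); json.dump(state, f)  -> the handle leaks
# after: the context manager closes the file even if json.dump raises
with fs.open("state.json", "w", encoding="utf-8") as f:
    json.dump(state, f)
```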
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5358/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5358/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6220/comments
https://api.github.com/repos/huggingface/datasets/issues/6220/events
https://github.com/huggingface/datasets/pull/6220
1,884,285,980
PR_kwDODunzps5ZspRb
6,220
Set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6220). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-09-06T15:40:33Z
2023-09-06T15:52:33Z
2023-09-06T15:41:13Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6220.diff", "html_url": "https://github.com/huggingface/datasets/pull/6220", "merged_at": "2023-09-06T15:41:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6220.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6220" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6220/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6220/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4013/comments
https://api.github.com/repos/huggingface/datasets/issues/4013/events
https://github.com/huggingface/datasets/issues/4013
1,180,427,174
I_kwDODunzps5GW-Om
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
{ "avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4", "events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}", "followers_url": "https://api.github.com/users/hazalturkmen/followers", "following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}", "gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hazalturkmen", "id": 42860397, "login": "hazalturkmen", "node_id": "MDQ6VXNlcjQyODYwMzk3", "organizations_url": "https://api.github.com/users/hazalturkmen/orgs", "received_events_url": "https://api.github.com/users/hazalturkmen/received_events", "repos_url": "https://api.github.com/users/hazalturkmen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions", "type": "User", "url": "https://api.github.com/users/hazalturkmen" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file...
2022-03-25T07:12:02Z
2022-04-04T08:05:01Z
2022-03-25T14:16:11Z
NONE
null
null
null
## Dataset viewer issue for 'hazal/Turkish-Biomedical-corpus-trM'

**Link:** https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM

I cannot see the dataset preview.

```
Server Error
Status code:   400
Exception:     HTTPError
Message:       403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true
```

Am I the one who added this dataset? Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4013/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5166/comments
https://api.github.com/repos/huggingface/datasets/issues/5166/events
https://github.com/huggingface/datasets/pull/5166
1,423,629,582
PR_kwDODunzps5Bj5IQ
5,166
Support dill 0.3.6
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think it hasn't been merged ? https://github.com/uqfoundation/dill/pull/501\r\n\r\nThough I can see that the CI is green because it uses dill 0.3.1.1 - we should probably fix the dill version in both CIs:\r\n- use 0.3.1.1 for the C...
2022-10-26T08:24:59Z
2022-10-28T05:41:05Z
2022-10-28T05:38:14Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5166.diff", "html_url": "https://github.com/huggingface/datasets/pull/5166", "merged_at": "2022-10-28T05:38:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/5166.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5166" }
This PR:
- ~~Unpins dill to allow installing dill>=0.3.6~~
- ~~Removes the fix on dill for >=0.3.6 because they implemented a deterministic mode (to be confirmed by @anivegesana)~~
- Pins dill<0.3.7 to allow the latest dill 0.3.6
- Implements a fix for dill `save_function` for dill 0.3.6
- Additionally, implements a fix for dill `save_code` and `_save_regex` for dill 0.3.6
- Fixes the CI so that the latest dill version is tested (besides the minimum 0.3.1.1 required by apache-beam 2.42.0)

Fix #5162.
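For context, a runnable sketch of why those internals matter, using plain `dill` (the patched reducers themselves live in `datasets`' private utils and are elided here): `datasets` needs `dill.dumps` of a function to be byte-for-byte reproducible so dataset fingerprints stay stable, and the patches enforcing this target private hooks (`save_function`, `save_code`, `_save_regex`) whose behavior changed in dill 0.3.6:

```python
import dill

def make_fn():
    a, b = 1, 2
    def f():
        return a + b
    return f

# fingerprinting hashes this payload; if dill's function pickling changes
# (as it did in 0.3.6), the deterministic patches must be re-implemented
payload = dill.dumps(make_fn())
print(len(payload))
```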
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5166/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5166/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6264/comments
https://api.github.com/repos/huggingface/datasets/issues/6264/events
https://github.com/huggingface/datasets/pull/6264
1,914,958,781
PR_kwDODunzps5bTvzh
6,264
Temporarily pin tensorflow < 2.14.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-09-27T08:16:06Z
2023-09-27T08:45:24Z
2023-09-27T08:36:39Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6264.diff", "html_url": "https://github.com/huggingface/datasets/pull/6264", "merged_at": "2023-09-27T08:36:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/6264.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6264" }
Temporarily pin tensorflow < 2.14.0 until a permanent solution is found. Hot fix #6263.
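An illustrative sketch of the temporary pin (the actual `setup.py` entry and its environment markers may differ):

```python
# keep tensorflow below 2.14.0 in the test extras until the breakage is resolved
TESTS_REQUIRE = [
    "tensorflow>=2.2.0,<2.14.0",
]
```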
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6264/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6264/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2379/comments
https://api.github.com/repos/huggingface/datasets/issues/2379/events
https://github.com/huggingface/datasets/pull/2379
895,252,597
MDExOlB1bGxSZXF1ZXN0NjQ3NDk2ODUx
2,379
Disallow duplicate keys in yaml tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-05-19T10:10:07Z
2021-05-19T10:45:32Z
2021-05-19T10:45:31Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2379.diff", "html_url": "https://github.com/huggingface/datasets/pull/2379", "merged_at": "2021-05-19T10:45:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/2379.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2379" }
Make sure that there are no duplicate keys in yaml tags. I added the check in the yaml tree constructor's method, so that the verification is done at every level of the yaml structure. cc @julien-c
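The same idea as a minimal sketch in plain PyYAML (the real check hooks `datasets`' yaml tree constructor, but the mechanics are identical: inspect each mapping node before constructing it, so every nesting level is covered):

```python
import yaml

class NoDuplicateSafeLoader(yaml.SafeLoader):
    def construct_mapping(self, node, deep=False):
        # materialize the keys of this mapping node and reject duplicates;
        # the loader recurses, so nested mappings get checked too
        keys = [self.construct_object(key_node, deep=deep) for key_node, _ in node.value]
        duplicates = {key for key in keys if keys.count(key) > 1}
        if duplicates:
            raise TypeError(f"Duplicate yaml keys: {duplicates}")
        return super().construct_mapping(node, deep=deep)

yaml.load("tags:\n  a: 1\n  a: 2\n", Loader=NoDuplicateSafeLoader)  # raises TypeError
```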
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2379/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2379/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1340/comments
https://api.github.com/repos/huggingface/datasets/issues/1340/events
https://github.com/huggingface/datasets/pull/1340
759,765,408
MDExOlB1bGxSZXF1ZXN0NTM0NzExMjc5
1,340
:fist: ¡Viva la Independencia! (Long live independence!)
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "I've added the changes / fixes - ready for a second pass :)" ]
2020-12-08T20:43:43Z
2020-12-14T10:36:01Z
2020-12-14T10:36:01Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1340.diff", "html_url": "https://github.com/huggingface/datasets/pull/1340", "merged_at": "2020-12-14T10:36:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/1340.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1340" }
Adds the Catalonia Independence Corpus for stance detection of tweets. Ready for review!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 3, "laugh": 4, "rocket": 0, "total_count": 8, "url": "https://api.github.com/repos/huggingface/datasets/issues/1340/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1340/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3499/comments
https://api.github.com/repos/huggingface/datasets/issues/3499/events
https://github.com/huggingface/datasets/issues/3499
1,090,132,618
I_kwDODunzps5A-hqK
3,499
Adjusting chunk size for streaming datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoelNiklaus", "id": 3775944, "login": "JoelNiklaus", "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "type": "User", "url": "https://api.github.com/users/JoelNiklaus" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to inc...
2021-12-28T21:17:53Z
2022-05-06T16:29:05Z
2022-05-06T16:29:05Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.**

I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?), I hit a performance bottleneck because of the frequent decompression.

**Describe the solution you'd like**

I would appreciate a parameter in the `load_dataset` function that allows me to set the chunk size myself (to a value like 100'000 in my case). That way, I hope to improve the processing time.
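As a stopgap, a minimal sketch of raising the buffering block size with plain `fsspec` (the URL is illustrative; the `load_dataset` call itself does not currently expose this knob):

```python
import fsspec

# fetch ~100 MiB per network read instead of the small default, so the
# decompressor restarts far less often while scanning the stream
url = "https://example.com/c4-train.00000-of-01024.json.gz"  # illustrative
n_docs = 0
with fsspec.open(url, "rb", compression="gzip", block_size=100 * 2**20) as f:
    for line in f:
        n_docs += 1  # a real filter would inspect each json document here
print(n_docs)
```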
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3499/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5829/comments
https://api.github.com/repos/huggingface/datasets/issues/5829/events
https://github.com/huggingface/datasets/issues/5829
1,699,958,189
I_kwDODunzps5lU02t
5,829
(mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
{ "avatar_url": "https://avatars.githubusercontent.com/u/18206728?v=4", "events_url": "https://api.github.com/users/elcolie/events{/privacy}", "followers_url": "https://api.github.com/users/elcolie/followers", "following_url": "https://api.github.com/users/elcolie/following{/other_user}", "gists_url": "https://api.github.com/users/elcolie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elcolie", "id": 18206728, "login": "elcolie", "node_id": "MDQ6VXNlcjE4MjA2NzI4", "organizations_url": "https://api.github.com/users/elcolie/orgs", "received_events_url": "https://api.github.com/users/elcolie/received_events", "repos_url": "https://api.github.com/users/elcolie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elcolie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elcolie/subscriptions", "type": "User", "url": "https://api.github.com/users/elcolie" }
[]
closed
false
null
[]
null
[ "Can you paste the error stack trace?", "That is weird. I can't reproduce it again after reboot.\r\n```python\r\nIn [2]: import platform\r\n\r\nIn [3]: platform.platform()\r\nOut[3]: 'macOS-13.2-arm64-arm-64bit'\r\n\r\nIn [4]: from datasets import load_dataset\r\n ...:\r\n ...: jazzy = load_dataset(\"nomic-ai...
2023-05-08T10:07:14Z
2023-06-30T11:39:14Z
2023-05-09T00:46:42Z
NONE
null
null
null
### Describe the bug

An M2 MBP can't run:

```python
from datasets import load_dataset

jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```

### Steps to reproduce the bug

1. Use an M2 MBP
2. Python 3.10.10 from pyenv
3. Run:

```
from datasets import load_dataset

jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```

### Expected behavior

Be able to run normally.

### Environment info

OSX: 13.2
CPU: M2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5829/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5829/timeline
null
completed
false