| url (string, len 58-61) | repository_url (string, 1 class) | labels_url (string, len 72-75) | comments_url (string, len 67-70) | events_url (string, len 65-68) | html_url (string, len 46-51) | id (int64, 600M-2.05B) | node_id (string, len 18-32) | number (int64, 2-6.51k) | title (string, len 1-290) | user (dict) | labels (list, len 0-4) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list, len 0-4) | milestone (dict) | comments (list, len 0-30) | created_at (timestamp[ns, tz=UTC]) | updated_at (timestamp[ns, tz=UTC]) | closed_at (timestamp[ns, tz=UTC]) | author_association (string, 3 classes) | active_lock_reason (float64) | draft (float64, 0/1/⌀) | pull_request (dict) | body (string, len 0-228k, ⌀) | reactions (dict) | timeline_url (string, len 67-70) | performed_via_github_app (float64) | state_reason (string, 3 classes) | is_pull_request (bool, 2 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4938/comments | https://api.github.com/repos/huggingface/datasets/issues/4938/events | https://github.com/huggingface/datasets/pull/4938 | 1,363,429,228 | PR_kwDODunzps4-coaB | 4,938 | Remove main branch rename notice | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T15:03:05Z | 2022-09-06T16:46:11Z | 2022-09-06T16:43:53Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"merged_at": "2022-09-06T16:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months)
I also unpinned the github issue about the branch renaming | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/742/comments | https://api.github.com/repos/huggingface/datasets/issues/742/events | https://github.com/huggingface/datasets/pull/742 | 724,509,974 | MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3 | 742 | Add OCNLI, a new CLUE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"Thanks :) merging it"
] | 2020-10-19T11:06:33Z | 2020-10-22T16:19:49Z | 2020-10-22T16:19:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/742",
"merged_at": "2020-10-22T16:19:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/742... | OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected following closely the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize we did not use hu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/742/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1145/comments | https://api.github.com/repos/huggingface/datasets/issues/1145/events | https://github.com/huggingface/datasets/pull/1145 | 757,477,349 | MDExOlB1bGxSZXF1ZXN0NTMyODQ4MTQx | 1,145 | Add Species-800 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4",
"events_url": "https://api.github.com/users/edugp/events{/privacy}",
"followers_url": "https://api.github.com/users/edugp/followers",
"following_url": "https://api.github.com/users/edugp/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"thanks @lhoestq ! I probably need to do the same change in the `SplitGenerator`s (lines 107, 110 and 113). I'll open a new PR for that",
"Yes indeed ! Good catch 👍 \r\nFeel free to open a PR and ping me",
"Hi , theres a issue pulling species_800 dataset , throws google drive error \r\n\r\nerror: \r\n\r\n```... | 2020-12-04T23:44:51Z | 2022-01-13T03:09:20Z | 2020-12-05T16:35:01Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1145",
"merged_at": "2020-12-05T16:35:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1145/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/5055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5055/comments | https://api.github.com/repos/huggingface/datasets/issues/5055/events | https://github.com/huggingface/datasets/pull/5055 | 1,394,503,844 | PR_kwDODunzps5ACyVU | 5,055 | Fix backward compatibility for dataset_infos.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T10:30:14Z | 2022-10-03T13:43:55Z | 2022-10-03T13:41:32Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"merged_at": "2022-10-03T13:41:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored o... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5055/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4099/comments | https://api.github.com/repos/huggingface/datasets/issues/4099/events | https://github.com/huggingface/datasets/issues/4099 | 1,193,253,768 | I_kwDODunzps5HH5uI | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | {
"avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4",
"events_url": "https://api.github.com/users/andreybond/events{/privacy}",
"followers_url": "https://api.github.com/users/andreybond/followers",
"following_url": "https://api.github.com/users/andreybond/following{/other_user}",
"gists_url"... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n ... | 2022-04-05T14:42:38Z | 2022-04-06T06:37:44Z | 2022-04-06T06:35:54Z | NONE | null | null | null | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected resu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4099/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6385/comments | https://api.github.com/repos/huggingface/datasets/issues/6385/events | https://github.com/huggingface/datasets/issues/6385 | 1,979,308,338 | I_kwDODunzps51-dky | 6,385 | Get an error when i try to concatenate the squad dataset with my own dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"events_url": "https://api.github.com/users/CCDXDX/events{/privacy}",
"followers_url": "https://api.github.com/users/CCDXDX/followers",
"following_url": "https://api.github.com/users/CCDXDX/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | null | [
"The `answers.text` field in the JSON dataset needs to be a list of strings, not a string.\r\n\r\nSo, here is the fixed code:\r\n```python\r\nfrom huggingface_hub import notebook_login\r\nfrom datasets import load_dataset\r\n\r\n\r\n\r\nnotebook_login(\"mymailadresse\", \"mypassword\")\r\nsquad = load_dataset(\"squ... | 2023-11-06T14:29:22Z | 2023-11-06T16:50:45Z | 2023-11-06T16:50:45Z | NONE | null | null | null | ### Describe the bug
Hello,
I'm new here and I need to concatenate the squad dataset with my own dataset i created. I find the following error when i try to do it: Traceback (most recent call last):
Cell In[9], line 1
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
File ~\ana... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6385/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6385/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/745/comments | https://api.github.com/repos/huggingface/datasets/issues/745/events | https://github.com/huggingface/datasets/pull/745 | 725,589,352 | MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0 | 745 | Fix emotion description | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [
"Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number? \r\nThank you in advanc... | 2020-10-20T13:28:39Z | 2021-04-22T14:47:31Z | 2020-10-21T08:38:27Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/745.diff",
"html_url": "https://github.com/huggingface/datasets/pull/745",
"merged_at": "2020-10-21T08:38:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/745.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/745... | Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.
I also took the liberty to make use of `ClassLabel` for the emotion labels. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/745/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3752/comments | https://api.github.com/repos/huggingface/datasets/issues/3752/events | https://github.com/huggingface/datasets/pull/3752 | 1,142,627,889 | PR_kwDODunzps4zD1D9 | 3,752 | Update metadata JSON for cats_vs_dogs dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2022-02-18T08:32:53Z | 2022-02-18T14:56:12Z | 2022-02-18T14:56:11Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3752",
"merged_at": "2022-02-18T14:56:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Note that the number of examples in the train split was already fixed in the dataset card.
Fix #3750. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3752/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6424/comments | https://api.github.com/repos/huggingface/datasets/issues/6424/events | https://github.com/huggingface/datasets/pull/6424 | 1,995,224,516 | PR_kwDODunzps5fiwDC | 6,424 | [docs] troubleshooting guide | {
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6424). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-11-15T17:28:14Z | 2023-11-30T17:29:55Z | 2023-11-30T17:23:46Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6424.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6424",
"merged_at": "2023-11-30T17:23:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6424.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Hi all! This is a PR adding a troubleshooting guide for Datasets docs.
I went through the library's GitHub Issues and Forum questions and identified a few issues that are common enough that I think it would be valuable to include them in the troubleshooting guide. These are:
- creating a dataset from a folder and n... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6424/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6424/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6411/comments | https://api.github.com/repos/huggingface/datasets/issues/6411/events | https://github.com/huggingface/datasets/pull/6411 | 1,992,386,630 | PR_kwDODunzps5fZE9F | 6,411 | Fix dependency conflict within CI build documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-11-14T09:52:51Z | 2023-11-14T10:05:59Z | 2023-11-14T10:05:35Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6411",
"merged_at": "2023-11-14T10:05:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`).
This is a temporary hot fix of our CI build documentation until we stop using `apache-beam`.
Fix #6406. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6411/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2708/comments | https://api.github.com/repos/huggingface/datasets/issues/2708/events | https://github.com/huggingface/datasets/issues/2708 | 951,092,660 | MDU6SXNzdWU5NTEwOTI2NjA= | 2,708 | QASC: incomplete training set | {
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "http... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @danyaljj, thanks for reporting.\r\n\r\nUnfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:\r\n```ipython\r\nIn [10]: ds[\"train\"]\r\nOut[10]:\r\nDataset({\r\n features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_quest... | 2021-07-22T21:59:44Z | 2021-07-23T13:30:07Z | 2021-07-23T13:30:07Z | CONTRIBUTOR | null | null | null | ## Describe the bug
The training instances are not loaded properly.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("qasc", script_version='1.10.2')
def load_instances(split):
instances = dataset[split]
print(f"split: {split} - size: {len(instanc... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2708/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2708/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4089/comments | https://api.github.com/repos/huggingface/datasets/issues/4089/events | https://github.com/huggingface/datasets/pull/4089 | 1,191,915,196 | PR_kwDODunzps41l7yd | 4,089 | Create metric card for Frugal Score | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-04T14:53:49Z | 2022-04-05T14:14:46Z | 2022-04-05T14:06:50Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4089.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4089",
"merged_at": "2022-04-05T14:06:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4089.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4089/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6119/comments | https://api.github.com/repos/huggingface/datasets/issues/6119/events | https://github.com/huggingface/datasets/pull/6119 | 1,835,996,350 | PR_kwDODunzps5XKI19 | 6,119 | [Docs] Add description of `select_columns` to guide | {
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-08-04T03:13:30Z | 2023-08-16T10:13:02Z | 2023-08-16T10:02:52Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6119",
"merged_at": "2023-08-16T10:02:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Closes #6116 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6119/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4926/comments | https://api.github.com/repos/huggingface/datasets/issues/4926/events | https://github.com/huggingface/datasets/pull/4926 | 1,360,384,484 | PR_kwDODunzps4-Srm1 | 4,926 | Dataset infos in yaml | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright this is ready for review :)\r\nI mostly would like your opinion on the YAML structure and what we can do in the docs (IMO we can add the docs about those fields in the Hub docs). Other than that let me know if the changes in ... | 2022-09-02T16:10:05Z | 2022-10-03T09:13:07Z | 2022-10-03T09:11:12Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4926",
"merged_at": "2022-10-03T09:11:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the readme already contain dataset metadata so we would have everything in one place.
To be more specific, I moved these fie... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4926/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/214/comments | https://api.github.com/repos/huggingface/datasets/issues/214/events | https://github.com/huggingface/datasets/pull/214 | 626,641,549 | MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx | 214 | [arrow_dataset.py] add new filter function | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [
"I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.",
... | 2020-05-28T16:21:40Z | 2020-05-29T11:43:29Z | 2020-05-29T11:32:20Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/214",
"merged_at": "2020-05-29T11:32:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/214... | The `.map()` function is super useful, but can IMO a bit tedious when filtering certain examples.
I think, filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit than the `.map()` function.
Here is a ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/214/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2186/comments | https://api.github.com/repos/huggingface/datasets/issues/2186/events | https://github.com/huggingface/datasets/pull/2186 | 852,840,819 | MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0 | 2,186 | GEM: new challenge sets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"cc @sebastiangehrmann"
] | 2021-04-07T21:39:07Z | 2021-04-07T21:56:35Z | 2021-04-07T21:56:35Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2186.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2186",
"merged_at": "2021-04-07T21:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2186.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2186/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/718/comments | https://api.github.com/repos/huggingface/datasets/issues/718/events | https://github.com/huggingface/datasets/pull/718 | 715,694,709 | MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw | 718 | Don't use tqdm 4.50.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-10-06T13:45:53Z | 2020-10-06T13:49:24Z | 2020-10-06T13:49:22Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/718",
"merged_at": "2020-10-06T13:49:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/718... | tqdm 4.50.0 introduced permission errors on windows
see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details.
For now I just added `<4.50.0` in the setup.py
Hopefully we can find what's wrong with this version soon | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/718/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5588/comments | https://api.github.com/repos/huggingface/datasets/issues/5588/events | https://github.com/huggingface/datasets/pull/5588 | 1,603,304,766 | PR_kwDODunzps5K8YYz | 5,588 | Flatten dataset on the fly in `save_to_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-28T15:37:46Z | 2023-02-28T17:28:35Z | 2023-02-28T17:21:17Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5588",
"merged_at": "2023-02-28T17:21:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Flatten a dataset on the fly in `save_to_disk` instead of doing it with `flatten_indices` to avoid creating an additional cache file.
(this is one of the sub-tasks in https://github.com/huggingface/datasets/issues/5507) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5588/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/139/comments | https://api.github.com/repos/huggingface/datasets/issues/139/events | https://github.com/huggingface/datasets/pull/139 | 619,327,409 | MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy | 139 | Add GermEval 2014 NER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/followin... | null | [
"Had real fun playing around with this new library :heart: ",
"That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ",
"@p... | 2020-05-15T23:42:09Z | 2020-05-16T13:56:37Z | 2020-05-16T13:56:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/139.diff",
"html_url": "https://github.com/huggingface/datasets/pull/139",
"merged_at": "2020-05-16T13:56:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/139.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/139... | Hi,
this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/139/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3895/comments | https://api.github.com/repos/huggingface/datasets/issues/3895/events | https://github.com/huggingface/datasets/pull/3895 | 1,166,619,182 | PR_kwDODunzps40T1C8 | 3,895 | Fix code examples indentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895). All of your documentation changes will be reflected on that endpoint.",
"Still not rendered properly: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895/en/package_reference/main_classes#datasets.Dataset.align_lab... | 2022-03-11T16:29:04Z | 2022-03-11T17:34:30Z | 2022-03-11T17:34:29Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3895.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3895",
"merged_at": "2022-03-11T17:34:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3895.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Some code examples are currently not rendered correctly. I think this is because they are over-indented.
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3895/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4546/comments | https://api.github.com/repos/huggingface/datasets/issues/4546/events | https://github.com/huggingface/datasets/pull/4546 | 1,282,093,288 | PR_kwDODunzps46Oe_K | 4,546 | [CI] fixing seqeval install in ci by pinning setuptools-scm | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-23T09:24:37Z | 2022-06-23T10:24:16Z | 2022-06-23T10:13:44Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4546.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4546",
"merged_at": "2022-06-23T10:13:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4546.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | The latest setuptools-scm version supported on 3.6 is 6.4.2. However, for some reason circleci has version 7, which doesn't work.
I fixed this by pinning the version of setuptools-scm in the circleci job
Fix https://github.com/huggingface/datasets/issues/4544 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4546/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5098/comments | https://api.github.com/repos/huggingface/datasets/issues/5098/events | https://github.com/huggingface/datasets/issues/5098 | 1,404,058,518 | I_kwDODunzps5TsDuW | 5,098 | Classes label error when loading symbolic links using imagefolder | {
"avatar_url": "https://avatars.githubusercontent.com/u/49552732?v=4",
"events_url": "https://api.github.com/users/horizon86/events{/privacy}",
"followers_url": "https://api.github.com/users/horizon86/followers",
"following_url": "https://api.github.com/users/horizon86/following{/other_user}",
"gists_url": "... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gi... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_u... | null | [
"It can be solved temporarily by removing `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278",
"Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `P... | 2022-10-11T06:10:58Z | 2022-11-14T14:40:20Z | 2022-11-14T14:40:20Z | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Can you give an option to decide wh... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5098/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/726/comments | https://api.github.com/repos/huggingface/datasets/issues/726/events | https://github.com/huggingface/datasets/issues/726 | 719,313,754 | MDU6SXNzdWU3MTkzMTM3NTQ= | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to re... | 2020-10-12T11:45:10Z | 2022-02-17T17:53:54Z | 2022-02-15T10:38:57Z | NONE | null | null | null | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/726/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5863/comments | https://api.github.com/repos/huggingface/datasets/issues/5863/events | https://github.com/huggingface/datasets/pull/5863 | 1,710,335,905 | PR_kwDODunzps5QhtlM | 5,863 | Use a new low-memory approach for tf dataset index shuffling | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5863). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-05-15T15:28:34Z | 2023-06-08T16:40:18Z | 2023-06-08T16:32:51Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5863.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5863",
"merged_at": "2023-06-08T16:32:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5863.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
Fixes #5855 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5863/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1261/comments | https://api.github.com/repos/huggingface/datasets/issues/1261/events | https://github.com/huggingface/datasets/pull/1261 | 758,626,112 | MDExOlB1bGxSZXF1ZXN0NTMzNzY4OTgy | 1,261 | Add Google Sentence Compression dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/46804938?v=4",
"events_url": "https://api.github.com/users/mattbui/events{/privacy}",
"followers_url": "https://api.github.com/users/mattbui/followers",
"following_url": "https://api.github.com/users/mattbui/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-12-07T15:47:43Z | 2020-12-08T17:01:59Z | 2020-12-08T17:01:59Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1261.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1261",
"merged_at": "2020-12-08T17:01:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1261.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | For more information: https://www.aclweb.org/anthology/D13-1155.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1261/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4005/comments | https://api.github.com/repos/huggingface/datasets/issues/4005/events | https://github.com/huggingface/datasets/issues/4005 | 1,179,365,663 | I_kwDODunzps5GS7Ef | 4,005 | Yelp not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"tr... | 2022-03-24T11:14:00Z | 2022-03-25T14:59:57Z | 2022-03-25T14:56:10Z | MEMBER | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train
Doesn't work:
```
Server error
Status code: 400
Exception: Error
Message: line contains NULL
```
Am I the one who added this dataset? No
A seemingly... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4005/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2552/comments | https://api.github.com/repos/huggingface/datasets/issues/2552/events | https://github.com/huggingface/datasets/issues/2552 | 931,354,687 | MDU6SXNzdWU5MzEzNTQ2ODc= | 2,552 | Keys should be unique error on code_search_net | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Two questions:\r\n- with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`?\r\n- I don't really understand why the id is duplicated in... | 2021-06-28T09:15:20Z | 2021-09-06T14:08:30Z | 2021-09-02T08:25:29Z | MEMBER | null | null | null | ## Describe the bug
Loading `code_search_net` does not seem to be possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s] ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2552/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1963/comments | https://api.github.com/repos/huggingface/datasets/issues/1963/events | https://github.com/huggingface/datasets/issues/1963 | 818,289,967 | MDU6SXNzdWU4MTgyODk5Njc= | 1,963 | bug in SNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset.\r\nFeel free to remove these examples if you don't need them by using\r\n```python\r\ndata = data.filter(lambda x: x[\"label\"] != -1)\r\n```"
] | 2021-02-28T19:36:20Z | 2022-10-05T13:13:46Z | 2022-10-05T13:13:46Z | NONE | null | null | null | Hi
There is a label of -1 in the train set of the SNLI dataset; please find the code below:
```
import numpy as np
import datasets
data = datasets.load_dataset("snli")["train"]
labels = []
for d in data:
labels.append(d["label"])
print(np.unique(labels))
```
and results:
`[-1 0 1 2]`
version of datas... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1963/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5317/comments | https://api.github.com/repos/huggingface/datasets/issues/5317/events | https://github.com/huggingface/datasets/issues/5317 | 1,470,390,164 | I_kwDODunzps5XpF-U | 5,317 | `ImageFolder` performs poorly with large datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4",
"events_url": "https://api.github.com/users/salieri/events{/privacy}",
"followers_url": "https://api.github.com/users/salieri/followers",
"following_url": "https://api.github.com/users/salieri/following{/other_user}",
"gists_url": "https:/... | [] | open | false | null | [] | null | [
"Hi ! ImageFolder is made for small scale datasets indeed. For large scale image datasets you better group your images in TAR archives or Arrow/Parquet files. This is true not just for ImageFolder loading performance, but also because having millions of files is not ideal for your filesystem or when moving the data... | 2022-12-01T00:04:21Z | 2022-12-01T21:49:26Z | null | NONE | null | null | null | ### Describe the bug
While testing image dataset creation, I'm seeing significant performance bottlenecks with `ImageFolder` when scanning a directory structure with a large number of images.
## Setup
* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file
## Performance Degradation Point... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5317/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5539/comments | https://api.github.com/repos/huggingface/datasets/issues/5539/events | https://github.com/huggingface/datasets/issues/5539 | 1,587,970,083 | I_kwDODunzps5epoAj | 5,539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | {
"avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4",
"events_url": "https://api.github.com/users/aalbersk/events{/privacy}",
"followers_url": "https://api.github.com/users/aalbersk/followers",
"following_url": "https://api.github.com/users/aalbersk/following{/other_user}",
"gists_url": "htt... | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
"Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='t... | 2023-02-16T16:08:51Z | 2023-02-22T10:30:30Z | 2023-02-21T13:03:57Z | NONE | null | null | null | ### Describe the bug
When a dataset contains a 0-dim tensor, `formatting.py` raises the following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5539/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1391/comments | https://api.github.com/repos/huggingface/datasets/issues/1391/events | https://github.com/huggingface/datasets/pull/1391 | 760,432,041 | MDExOlB1bGxSZXF1ZXN0NTM1MjY0NjUx | 1,391 | Add MultiParaCrawl Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user... | [] | closed | false | null | [] | null | [] | 2020-12-09T15:32:46Z | 2020-12-10T18:39:45Z | 2020-12-10T18:39:44Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1391.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1391",
"merged_at": "2020-12-10T18:39:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1391.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1391/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1391/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/4826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4826/comments | https://api.github.com/repos/huggingface/datasets/issues/4826/events | https://github.com/huggingface/datasets/pull/4826 | 1,335,987,583 | PR_kwDODunzps49B0V3 | 4,826 | Fix language tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 2022-08-11T13:47:14Z | 2022-08-11T14:17:48Z | 2022-08-11T14:03:12Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4826",
"merged_at": "2022-08-11T14:03:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4826/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2429/comments | https://api.github.com/repos/huggingface/datasets/issues/2429/events | https://github.com/huggingface/datasets/pull/2429 | 907,321,665 | MDExOlB1bGxSZXF1ZXN0NjU4MTg2ODc0 | 2,429 | Rename QuestionAnswering template to QuestionAnsweringExtractive | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [
"> I like having \"extractive\" in the name to make things explicit. However this creates an inconsistency with transformers.\r\n> \r\n> See\r\n> https://huggingface.co/transformers/task_summary.html#extractive-question-answering\r\n> \r\n> But this is minor IMO and I'm ok with this renaming\r\n\r\nyes i chose this... | 2021-05-31T10:04:42Z | 2021-05-31T15:57:26Z | 2021-05-31T15:57:24Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2429",
"merged_at": "2021-05-31T15:57:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2429/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5247/comments | https://api.github.com/repos/huggingface/datasets/issues/5247/events | https://github.com/huggingface/datasets/pull/5247 | 1,451,297,749 | PR_kwDODunzps5DAhto | 5,247 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5247). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-16T10:17:31Z | 2022-11-16T10:22:20Z | 2022-11-16T10:17:50Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5247",
"merged_at": "2022-11-16T10:17:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5247/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2139/comments | https://api.github.com/repos/huggingface/datasets/issues/2139/events | https://github.com/huggingface/datasets/issues/2139 | 843,662,613 | MDU6SXNzdWU4NDM2NjI2MTM= | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | {
"avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4",
"events_url": "https://api.github.com/users/PedroMLF/events{/privacy}",
"followers_url": "https://api.github.com/users/PedroMLF/followers",
"following_url": "https://api.github.com/users/PedroMLF/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!"
] | 2021-03-29T18:23:54Z | 2021-03-30T09:12:53Z | 2021-03-30T09:12:53Z | NONE | null | null | null | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2139/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6489/comments | https://api.github.com/repos/huggingface/datasets/issues/6489/events | https://github.com/huggingface/datasets/issues/6489 | 2,036,743,777 | I_kwDODunzps55Zj5h | 6,489 | load_dataset imageflder for aws s3 path | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4",
"events_url": "https://api.github.com/users/segalinc/events{/privacy}",
"followers_url": "https://api.github.com/users/segalinc/followers",
"following_url": "https://api.github.com/users/segalinc/following{/other_user}",
"gists_url": "http... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2023-12-12T00:08:43Z | 2023-12-12T00:09:27Z | null | NONE | null | null | null | ### Feature request
I would like to load a dataset from S3 using the imagefolder option
something like
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
### Motivation
no need of data_files
### Your contribution
no experience... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6489/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3102/comments | https://api.github.com/repos/huggingface/datasets/issues/3102/events | https://github.com/huggingface/datasets/issues/3102 | 1,029,067,062 | I_kwDODunzps49VlE2 | 3,102 | Unsuitable project description in PyPI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2021-10-18T12:45:00Z | 2021-10-18T12:59:56Z | 2021-10-18T12:59:56Z | MEMBER | null | null | null | Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3102/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/203/comments | https://api.github.com/repos/huggingface/datasets/issues/203/events | https://github.com/huggingface/datasets/pull/203 | 625,515,488 | MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3 | 203 | Raise an error if no config name for datasets like glue | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-05-27T09:03:58Z | 2020-05-27T16:40:39Z | 2020-05-27T16:40:38Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/203",
"merged_at": "2020-05-27T16:40:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/203... | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example for glue there are cola, sst2, mrpc etc.
Currently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to p... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/203/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/775/comments | https://api.github.com/repos/huggingface/datasets/issues/775/events | https://github.com/huggingface/datasets/pull/775 | 732,287,504 | MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3 | 775 | Properly delete metrics when a process is killed | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-10-29T12:52:07Z | 2020-10-29T14:01:20Z | 2020-10-29T14:01:19Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/775.diff",
"html_url": "https://github.com/huggingface/datasets/pull/775",
"merged_at": "2020-10-29T14:01:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/775.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/775... | Tests are flaky when using metrics in distributed setup.
There is because of one test that make sure that using two possibly incompatible metric computation (same exp id) either works or raises the right error.
However if the error is raised, all the processes of the metric are killed, and the open files (arrow + loc... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/775/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6013/comments | https://api.github.com/repos/huggingface/datasets/issues/6013/events | https://github.com/huggingface/datasets/issues/6013 | 1,796,083,437 | I_kwDODunzps5rDg7t | 6,013 | [FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage | {
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": fals... | open | false | null | [] | null | [
"You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_c... | 2023-07-10T06:42:20Z | 2023-07-10T15:37:52Z | null | CONTRIBUTOR | null | null | null | ### Feature request
Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns.
### Motivation
This allows having datasets with different columns but sharing some basic columns. Currently, these datasets wou... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6013/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/242/comments | https://api.github.com/repos/huggingface/datasets/issues/242/events | https://github.com/huggingface/datasets/issues/242 | 631,733,683 | MDU6SXNzdWU2MzE3MzM2ODM= | 242 | UnicodeDecodeError when downloading GLUE-MNLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure",
"On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts wou... | 2020-06-05T16:30:01Z | 2020-06-09T16:06:47Z | 2020-06-08T08:45:03Z | CONTRIBUTOR | null | null | null | When I run
```python
dataset = nlp.load_dataset('glue', 'mnli')
```
I get an encoding error (could it be because I'm using Windows?) :
```python
# Lots of error log lines later...
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/242/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2560/comments | https://api.github.com/repos/huggingface/datasets/issues/2560/events | https://github.com/huggingface/datasets/pull/2560 | 932,143,634 | MDExOlB1bGxSZXF1ZXN0Njc5NTMyODk4 | 2,560 | fix Dataset.map when num_procs > num rows | {
"avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4",
"events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}",
"followers_url": "https://api.github.com/users/connor-mccarthy/followers",
"following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}"... | [] | closed | false | null | [] | null | [
"Hi ! Thanks for fixing this :)\r\n\r\nLooks like you have tons of changes due to code formatting.\r\nWe're using `black` for this, with a custom line length. To run our code formatting, you just need to run\r\n```\r\nmake style\r\n```\r\n\r\nThen for the windows error in the CI, I'm looking into it. It's probably ... | 2021-06-29T02:24:11Z | 2021-06-29T15:00:18Z | 2021-06-29T14:53:31Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2560",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2560"
} | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2560/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4701/comments | https://api.github.com/repos/huggingface/datasets/issues/4701/events | https://github.com/huggingface/datasets/pull/4701 | 1,307,689,625 | PR_kwDODunzps47jeE9 | 4,701 | Added more information in the README about contributors of the Arabic Speech Corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/2845798?v=4",
"events_url": "https://api.github.com/users/nawarhalabi/events{/privacy}",
"followers_url": "https://api.github.com/users/nawarhalabi/followers",
"following_url": "https://api.github.com/users/nawarhalabi/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | [] | 2022-07-18T09:48:03Z | 2022-07-28T10:33:05Z | 2022-07-28T10:33:05Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4701.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4701",
"merged_at": "2022-07-28T10:33:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4701.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Added more information in the README about contributors and encouraged reading the thesis for more infos | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4701/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4701/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6002/comments | https://api.github.com/repos/huggingface/datasets/issues/6002/events | https://github.com/huggingface/datasets/pull/6002 | 1,786,053,060 | PR_kwDODunzps5UhP-Z | 6,002 | Add KLUE-MRC metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/37537248?v=4",
"events_url": "https://api.github.com/users/ingyuseong/events{/privacy}",
"followers_url": "https://api.github.com/users/ingyuseong/followers",
"following_url": "https://api.github.com/users/ingyuseong/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https://huggingface.co/docs/evaluate/creating_and_sharing)."
] | 2023-07-03T12:11:10Z | 2023-07-09T11:57:20Z | 2023-07-09T11:57:20Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6002",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6002"
} | ## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension)
Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue).
KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.
Specifically, in the case of... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6002/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3776/comments | https://api.github.com/repos/huggingface/datasets/issues/3776/events | https://github.com/huggingface/datasets/issues/3776 | 1,146,932,871 | I_kwDODunzps5EXM6H | 3,776 | Allow download only some files from the Wikipedia dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4",
"events_url": "https://api.github.com/users/jvanz/events{/privacy}",
"followers_url": "https://api.github.com/users/jvanz/followers",
"following_url": "https://api.github.com/users/jvanz/following{/other_user}",
"gists_url": "https://api.g... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi @jvanz, thank you for your proposal.\r\n\r\nIn fact, we are aware that it is very common the problem you mention. Because of that, we are currently working in implementing a new version of wikipedia on the Hub, with all data preprocessed (no need to use Apache Beam), from where you will be able to use `data_fil... | 2022-02-22T13:46:41Z | 2022-02-22T14:50:02Z | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
The Wikipedia dataset can be really big. This is a problem if you want to use it locally in a laptop with the Apache Beam `DirectRunner`. Even if your laptop have a considerable amount of memory (e.g. 32gb).
**Describe the solution you'd like**
I... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3776/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5282/comments | https://api.github.com/repos/huggingface/datasets/issues/5282/events | https://github.com/huggingface/datasets/pull/5282 | 1,460,238,928 | PR_kwDODunzps5Det2_ | 5,282 | Release: 2.7.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2022-11-22T16:58:54Z | 2022-11-22T17:21:28Z | 2022-11-22T17:21:27Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5282",
"merged_at": "2022-11-22T17:21:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5282/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1118/comments | https://api.github.com/repos/huggingface/datasets/issues/1118/events | https://github.com/huggingface/datasets/pull/1118 | 757,142,350 | MDExOlB1bGxSZXF1ZXN0NTMyNTY3ODMw | 1,118 | Add Tashkeela dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"Sorry @lhoestq for the trouble, sometime I forget to change the names :/",
"> Sorry @lhoestq for the trouble, sometime I forget to change the names :/\r\n\r\nhaha it's ok ;)"
] | 2020-12-04T14:26:18Z | 2020-12-04T15:47:01Z | 2020-12-04T15:46:51Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1118",
"merged_at": "2020-12-04T15:46:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Arabic Vocalized Words Dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1118/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5448/comments | https://api.github.com/repos/huggingface/datasets/issues/5448/events | https://github.com/huggingface/datasets/issues/5448 | 1,550,618,514 | I_kwDODunzps5cbI-S | 5,448 | Support fsspec 2023.1.0 in CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2023-01-20T10:26:31Z | 2023-01-20T13:26:05Z | 2023-01-20T13:26:05Z | MEMBER | null | null | null | Once we find out the root cause of:
- #5445
we should revert the temporary pin on fsspec introduced by:
- #5447 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5448/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5448/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3696/comments | https://api.github.com/repos/huggingface/datasets/issues/3696/events | https://github.com/huggingface/datasets/pull/3696 | 1,129,764,534 | PR_kwDODunzps4yXXgH | 3,696 | Force unique keys in newsqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2022-02-10T10:09:19Z | 2022-02-14T08:37:20Z | 2022-02-14T08:37:19Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3696.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3696",
"merged_at": "2022-02-14T08:37:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3696.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Currently, it may raise `DuplicatedKeysError`.
Fix #3630. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3696/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2256/comments | https://api.github.com/repos/huggingface/datasets/issues/2256/events | https://github.com/huggingface/datasets/issues/2256 | 866,708,609 | MDU6SXNzdWU4NjY3MDg2MDk= | 2,256 | Running `datase.map` with `num_proc > 1` uses a lot of memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4",
"events_url": "https://api.github.com/users/roskoN/events{/privacy}",
"followers_url": "https://api.github.com/users/roskoN/followers",
"following_url": "https://api.github.com/users/roskoN/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"Thanks for reporting ! We are working on this and we'll do a patch release very soon.",
"We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)"
] | 2021-04-24T09:56:20Z | 2021-04-26T17:12:15Z | 2021-04-26T17:12:15Z | NONE | null | null | null | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2256/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1710/comments | https://api.github.com/repos/huggingface/datasets/issues/1710/events | https://github.com/huggingface/datasets/issues/1710 | 781,914,951 | MDU6SXNzdWU3ODE5MTQ5NTE= | 1,710 | IsADirectoryError when trying to download C4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4",
"events_url": "https://api.github.com/users/fredriko/events{/privacy}",
"followers_url": "https://api.github.com/users/fredriko/followers",
"following_url": "https://api.github.com/users/fredriko/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [
"I haven't tested C4 on my side so there may be a few bugs in the code/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files.",
"Fixed once processed data is used instead:\r\n... | 2021-01-08T07:31:30Z | 2022-08-04T11:56:10Z | 2022-08-04T11:55:04Z | NONE | null | null | null | **TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When runn... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1710/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/771/comments | https://api.github.com/repos/huggingface/datasets/issues/771/events | https://github.com/huggingface/datasets/issues/771 | 731,482,213 | MDU6SXNzdWU3MzE0ODIyMTM= | 771 | Using `Dataset.map` with `n_proc>1` print multiple progress bars | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar",
"... | 2020-10-28T14:13:27Z | 2023-02-13T20:16:39Z | 2023-02-13T20:16:39Z | CONTRIBUTOR | null | null | null | When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/771/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/619/comments | https://api.github.com/repos/huggingface/datasets/issues/619/events | https://github.com/huggingface/datasets/issues/619 | 699,733,612 | MDU6SXNzdWU2OTk3MzM2MTI= | 619 | Mistakes in MLQA features names | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [
"Indeed you're right ! Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?"
] | 2020-09-11T20:46:23Z | 2020-09-16T06:59:19Z | 2020-09-16T06:59:19Z | CONTRIBUTOR | null | null | null | I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/619/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4351/comments | https://api.github.com/repos/huggingface/datasets/issues/4351/events | https://github.com/huggingface/datasets/issues/4351 | 1,235,950,209 | I_kwDODunzps5JqxqB | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | {
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https:/... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/data... | 2022-05-14T11:30:42Z | 2022-12-14T18:22:59Z | 2022-12-14T18:22:59Z | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as s3), the process of uploading a dataset can take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took like 35 minutes(a...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4351/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4217/comments | https://api.github.com/repos/huggingface/datasets/issues/4217/events | https://github.com/huggingface/datasets/issues/4217 | 1,214,688,141 | I_kwDODunzps5IZquN | 4,217 | Big_Patent dataset broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/54189843?v=4",
"events_url": "https://api.github.com/users/Matthew-Larsen/events{/privacy}",
"followers_url": "https://api.github.com/users/Matthew-Larsen/followers",
"following_url": "https://api.github.com/users/Matthew-Larsen/following{/other_user}",
... | [
{
"color": "8B51EF",
"default": false,
"description": "",
"id": 4069435429,
"name": "hosted-on-google-drive",
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://gith... | 2022-04-25T15:31:45Z | 2022-05-26T06:29:43Z | 2022-05-02T18:21:15Z | NONE | null | null | null | ## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4217/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1343/comments | https://api.github.com/repos/huggingface/datasets/issues/1343/events | https://github.com/huggingface/datasets/pull/1343 | 759,809,999 | MDExOlB1bGxSZXF1ZXN0NTM0NzQ4NTE4 | 1,343 | Add LiveQA | {
"avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4",
"events_url": "https://api.github.com/users/j-chim/events{/privacy}",
"followers_url": "https://api.github.com/users/j-chim/followers",
"following_url": "https://api.github.com/users/j-chim/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [] | 2020-12-08T21:52:36Z | 2020-12-14T09:40:28Z | 2020-12-14T09:40:28Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1343.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1343",
"merged_at": "2020-12-14T09:40:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1343.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1343/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1343/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/events | https://github.com/huggingface/datasets/pull/1834 | 803,517,094 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | 1,834 | Fixes base_url of limit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | null | [
"OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."
] | 2021-02-08T12:26:35Z | 2021-02-08T12:42:50Z | 2021-02-08T12:42:50Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
} | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1501/comments | https://api.github.com/repos/huggingface/datasets/issues/1501/events | https://github.com/huggingface/datasets/pull/1501 | 763,517,647 | MDExOlB1bGxSZXF1ZXN0NTM3OTYzMDU5 | 1,501 | Adds XED dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/24206326?v=4",
"events_url": "https://api.github.com/users/harshalmittal4/events{/privacy}",
"followers_url": "https://api.github.com/users/harshalmittal4/followers",
"following_url": "https://api.github.com/users/harshalmittal4/following{/other_user}",
... | [] | closed | false | null | [] | null | [
"Hi @lhoestq @yjernite, requesting you to review this for any changes needed. Thanks! :)"
] | 2020-12-12T09:47:00Z | 2020-12-14T21:20:59Z | 2020-12-14T21:20:59Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1501",
"merged_at": "2020-12-14T21:20:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1501/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/6216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6216/comments | https://api.github.com/repos/huggingface/datasets/issues/6216/events | https://github.com/huggingface/datasets/pull/6216 | 1,883,492,703 | PR_kwDODunzps5Zp8al | 6,216 | Release: 2.13.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-09-06T08:15:32Z | 2023-09-06T08:52:18Z | 2023-09-06T08:22:43Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6216.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6216",
"merged_at": "2023-09-06T08:22:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6216.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6216/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6216/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2438/comments | https://api.github.com/repos/huggingface/datasets/issues/2438/events | https://github.com/huggingface/datasets/pull/2438 | 908,461,914 | MDExOlB1bGxSZXF1ZXN0NjU5MTQ5Njg0 | 2,438 | Fix NQ features loading: reorder fields of features to match nested fields order in arrow data | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-06-01T16:09:30Z | 2021-06-04T09:02:31Z | 2021-06-04T09:02:31Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2438.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2438",
"merged_at": "2021-06-04T09:02:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2438.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | As mentioned in #2401, there is an issue when loading the features of `natural_questions`, since the order of the nested fields in the features doesn't match. The order is important since it matters for the underlying arrow schema.
To fix that I re-order the features based on the arrow schema:
```python
inferred_fe... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4314/comments | https://api.github.com/repos/huggingface/datasets/issues/4314/events | https://github.com/huggingface/datasets/pull/4314 | 1,232,326,726 | PR_kwDODunzps43oqXD | 4,314 | Catch pull error when mirroring | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T09:38:35Z | 2022-05-11T12:54:07Z | 2022-05-11T12:46:42Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4314",
"merged_at": "2022-05-11T12:46:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Catch pull errors when mirroring so that the script continues to update the other datasets.
The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4314/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4314/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2267/comments | https://api.github.com/repos/huggingface/datasets/issues/2267/events | https://github.com/huggingface/datasets/issues/2267 | 868,291,129 | MDU6SXNzdWU4NjgyOTExMjk= | 2,267 | DatasetDict save load Failing test in 1.6 not in 1.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Thanks for reporting ! We're looking into it",
"I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?",
"Hi, I just ran into a similar error. Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load... | 2021-04-27T00:03:25Z | 2021-05-28T15:27:34Z | null | NONE | null | null | null | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2267/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5361/comments | https://api.github.com/repos/huggingface/datasets/issues/5361/events | https://github.com/huggingface/datasets/issues/5361 | 1,497,153,889 | I_kwDODunzps5ZPMFh | 5,361 | How concatenate `Audio` elements using batch mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4",
"events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}",
"followers_url": "https://api.github.com/users/bayartsogt-ya/followers",
"following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"You can try something like this ?\r\n```python\r\ndef mapper_function(batch):\r\n return {\"concatenated_audio\": [np.concatenate([audio[\"array\"] for audio in batch[\"audio\"]])]}\r\n\r\ndataset = dataset.map(\r\n mapper_function,\r\n batched=True,\r\n batch_size=3,\r\n remove_columns=list(dataset.... | 2022-12-14T18:13:55Z | 2023-07-21T14:30:51Z | 2023-07-21T14:30:51Z | NONE | null | null | null | ### Describe the bug
I am trying to concatenate audios in a dataset, e.g. `google/fleurs`.
```python
print(dataset)
# Dataset({
# features: ['path', 'audio'],
# num_rows: 24
# })
def mapper_function(batch):
# to merge every 3 audios
# np.concatenate(audios[i: i+3]) for i in range(i, len(batc...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5361/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5361/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2313/comments | https://api.github.com/repos/huggingface/datasets/issues/2313/events | https://github.com/huggingface/datasets/pull/2313 | 875,475,367 | MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4 | 2,313 | Remove unused head_hf_s3 function | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-05-04T13:42:06Z | 2021-05-07T09:31:42Z | 2021-05-07T09:31:42Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2313",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2313"
} | Currently, the function `head_hf_s3` is not used:
- neither is its returned result used
- nor does it raise any exception, as exceptions are caught and returned (not raised)
This PR removes it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2313/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4221/comments | https://api.github.com/repos/huggingface/datasets/issues/4221/events | https://github.com/huggingface/datasets/issues/4221 | 1,215,911,182 | I_kwDODunzps5IeVUO | 4,221 | Dictionary Feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/2944532?v=4",
"events_url": "https://api.github.com/users/jordiae/events{/privacy}",
"followers_url": "https://api.github.com/users/jordiae/followers",
"following_url": "https://api.github.com/users/jordiae/following{/other_user}",
"gists_url": "https:/... | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n... | 2022-04-26T12:50:18Z | 2022-04-29T14:52:19Z | 2022-04-28T17:04:58Z | NONE | null | null | null | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4221/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/28 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/28/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/28/comments | https://api.github.com/repos/huggingface/datasets/issues/28/events | https://github.com/huggingface/datasets/pull/28 | 610,241,907 | MDExOlB1bGxSZXF1ZXN0NDExNzE5MTQy | 28 | [Circle ci] Adds circle ci config | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [] | 2020-04-30T17:03:35Z | 2020-04-30T19:51:09Z | 2020-04-30T19:51:08Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/28.diff",
"html_url": "https://github.com/huggingface/datasets/pull/28",
"merged_at": "2020-04-30T19:51:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/28.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/28"
} | @thomwolf can you take a look and set up circle ci on:
https://app.circleci.com/projects/project-dashboard/github/huggingface
I think for `nlp` only admins can set it up, which I guess is you :-) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/28/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/28/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2939/comments | https://api.github.com/repos/huggingface/datasets/issues/2939/events | https://github.com/huggingface/datasets/pull/2939 | 999,639,630 | PR_kwDODunzps4r58Gu | 2,939 | MENYO-20k repo has moved, updating URL | {
"avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4",
"events_url": "https://api.github.com/users/cdleong/events{/privacy}",
"followers_url": "https://api.github.com/users/cdleong/followers",
"following_url": "https://api.github.com/users/cdleong/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [] | 2021-09-17T19:01:54Z | 2021-09-21T15:31:37Z | 2021-09-21T15:31:36Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2939",
"merged_at": "2021-09-21T15:31:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match.
https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2939/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6179/comments | https://api.github.com/repos/huggingface/datasets/issues/6179/events | https://github.com/huggingface/datasets/issues/6179 | 1,867,009,016 | I_kwDODunzps5vSEv4 | 6,179 | Map cache with tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_... | [] | open | false | null | [] | null | [
"https://github.com/huggingface/datasets/issues/5147 may be a solution, by passing in the tokenizer in a fn_kwargs and ignoring it in the fingerprint calculations",
"I have a similar issue. I was using a Jupyter Notebook and every time I call the map function it performs tokenization from scratch again although t... | 2023-08-25T12:55:18Z | 2023-08-31T15:17:24Z | null | NONE | null | null | null | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with...
setup
```... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6179/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6503/comments | https://api.github.com/repos/huggingface/datasets/issues/6503/events | https://github.com/huggingface/datasets/pull/6503 | 2,043,847,591 | PR_kwDODunzps5iHgZf | 6,503 | Fix streaming xnli | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | 2023-12-15T14:40:57Z | 2023-12-15T14:51:06Z | 2023-12-15T14:44:47Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6503.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6503",
"merged_at": "2023-12-15T14:44:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6503.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This code was failing
```python
In [1]: from datasets import load_dataset
In [2]:
...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
...:
...: sample_data = next(iter(ds))["premise"] # pick up one data
...: input_text = list(sample_data.valu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6503/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2415/comments | https://api.github.com/repos/huggingface/datasets/issues/2415/events | https://github.com/huggingface/datasets/issues/2415 | 903,923,097 | MDU6SXNzdWU5MDM5MjMwOTc= | 2,415 | Cached dataset not loaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": ... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"It actually seems to happen all the time in above configuration:\r\n* the function `filter_by_duration` correctly loads cached processed dataset\r\n* the function `prepare_dataset` is always reexecuted\r\n\r\nI end up solving the issue by saving to disk my dataset at the end but I'm still wondering if it's a bug o... | 2021-05-27T15:40:06Z | 2021-06-02T13:15:47Z | 2021-06-02T13:15:47Z | CONTRIBUTOR | null | null | null | ## Describe the bug
I have a large dataset (common_voice, english) where I use several map and filter functions.
Sometimes my cached datasets are not loaded after specific functions.
I always use the same arguments, same functions, no seed…
## Steps to reproduce the bug
```python
def filter_by_duration(batch):
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2415/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2415/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6169/comments | https://api.github.com/repos/huggingface/datasets/issues/6169/events | https://github.com/huggingface/datasets/issues/6169 | 1,862,360,199 | I_kwDODunzps5vAVyH | 6,169 | Configurations in yaml not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/45085098?v=4",
"events_url": "https://api.github.com/users/tsor13/events{/privacy}",
"followers_url": "https://api.github.com/users/tsor13/followers",
"following_url": "https://api.github.com/users/tsor13/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | null | [
"Unfortunately, I cannot reproduce this behavior on my machine or Colab - the reproducer returns `['main_data', 'additional_data']` as expected.",
"Thank you for looking into this, Mario. Is this on [my repository](https://huggingface.co/datasets/tsor13/test), or on another one that you have reproduced? Would you... | 2023-08-23T00:13:22Z | 2023-08-23T15:35:31Z | null | NONE | null | null | null | ### Dataset configurations cannot be created in YAML/README
Hello! I'm trying to follow the docs here in order to create structure in my dataset, as added in #5331: https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118
I have t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6169/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6001/comments | https://api.github.com/repos/huggingface/datasets/issues/6001/events | https://github.com/huggingface/datasets/pull/6001 | 1,782,516,627 | PR_kwDODunzps5UVMMh | 6,001 | Align `column_names` type check with type hint in `sort` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-06-30T13:15:50Z | 2023-06-30T14:18:32Z | 2023-06-30T14:11:24Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6001.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6001",
"merged_at": "2023-06-30T14:11:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6001.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fix #5998 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6001/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/433/comments | https://api.github.com/repos/huggingface/datasets/issues/433/events | https://github.com/huggingface/datasets/issues/433 | 665,311,025 | MDU6SXNzdWU2NjUzMTEwMjU= | 433 | How to reuse functionality of a (generic) dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [
"Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/... | 2020-07-24T17:27:37Z | 2022-10-04T17:59:34Z | 2022-10-04T17:59:33Z | NONE | null | null | null | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/433/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1334/comments | https://api.github.com/repos/huggingface/datasets/issues/1334/events | https://github.com/huggingface/datasets/pull/1334 | 759,699,993 | MDExOlB1bGxSZXF1ZXN0NTM0NjU5MDg2 | 1,334 | Add QED Amara Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user... | [] | closed | false | null | [] | null | [] | 2020-12-08T19:01:13Z | 2020-12-10T11:17:25Z | 2020-12-10T11:15:57Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1334",
"merged_at": "2020-12-10T11:15:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1334/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/3983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3983/comments | https://api.github.com/repos/huggingface/datasets/issues/3983/events | https://github.com/huggingface/datasets/issues/3983 | 1,175,759,412 | I_kwDODunzps5GFKo0 | 3,983 | Infinitely attempting lock | {
"avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4",
"events_url": "https://api.github.com/users/jyrr/events{/privacy}",
"followers_url": "https://api.github.com/users/jyrr/followers",
"following_url": "https://api.github.com/users/jyrr/following{/other_user}",
"gists_url": "https://api.git... | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest vers... | 2022-03-21T18:11:57Z | 2022-05-06T16:12:18Z | 2022-05-06T16:12:18Z | NONE | null | null | null | I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`.
It is important to note that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS).
```
%sh
python /dbfs/transformers/examples/pytorch/summarization/run_summariz... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3983/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4571/comments | https://api.github.com/repos/huggingface/datasets/issues/4571/events | https://github.com/huggingface/datasets/issues/4571 | 1,284,883,289 | I_kwDODunzps5MlcNZ | 4,571 | move under the facebook org? | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://a... | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ",
"I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?",
"fwiw: the dataset... | 2022-06-26T11:19:09Z | 2023-09-25T12:05:18Z | null | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4571/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2634/comments | https://api.github.com/repos/huggingface/datasets/issues/2634/events | https://github.com/huggingface/datasets/pull/2634 | 942,805,621 | MDExOlB1bGxSZXF1ZXN0Njg4NDk2Mzc2 | 2,634 | Inject ASR template for lj_speech dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/... | [] | 2021-07-13T06:04:54Z | 2021-07-13T09:05:09Z | 2021-07-13T09:05:09Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2634.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2634",
"merged_at": "2021-07-13T09:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2634.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Related to: #2565, #2633.
cc: @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2634/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/276/comments | https://api.github.com/repos/huggingface/datasets/issues/276/events | https://github.com/huggingface/datasets/pull/276 | 639,490,858 | MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5 | 276 | Fix metric compute (original_instructions missing) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"Awesome! This is working now:\r\n\r\n```python\r\nimport nlp \r\nseqeval = nlp.load_metric(\"seqeval\") \r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] ... | 2020-06-16T08:52:01Z | 2020-06-18T07:41:45Z | 2020-06-18T07:41:44Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/276.diff",
"html_url": "https://github.com/huggingface/datasets/pull/276",
"merged_at": "2020-06-18T07:41:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/276.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/276... | When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset.
However, metrics load data the same way but don't need instructions (we use a single file).
In this PR I just make `original_instructions` optional when reading files to load a `Datas... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/276/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1738/comments | https://api.github.com/repos/huggingface/datasets/issues/1738/events | https://github.com/huggingface/datasets/pull/1738 | 786,068,440 | MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4 | 1,738 | Conda support | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"Nice thanks :) \r\nNote that in `datasets` the tags are simply the version without the `v`. For example `1.2.1`.",
"Do you push tags only for versions?",
"Yes I've always used tags only for versions"
] | 2021-01-14T15:11:25Z | 2021-01-15T10:08:20Z | 2021-01-15T10:08:19Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1738",
"merged_at": "2021-01-15T10:08:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).
Will appear here: https://anaconda.org/huggingface/datasets
Depends on `conda-forge` for now, so the following is required for installation:
```
conda install -c huggingface -c conda-forge datasets
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 4,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1738/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6417/comments | https://api.github.com/repos/huggingface/datasets/issues/6417/events | https://github.com/huggingface/datasets/issues/6417 | 1,993,149,416 | I_kwDODunzps52zQvo | 6,417 | Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/57496007?v=4",
"events_url": "https://api.github.com/users/Davo00/events{/privacy}",
"followers_url": "https://api.github.com/users/Davo00/followers",
"following_url": "https://api.github.com/users/Davo00/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | [
"Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on b... | 2023-11-14T16:53:20Z | 2023-11-16T20:23:41Z | 2023-11-16T20:23:41Z | NONE | null | null | null | ### Describe the bug
Arrow issues when running the example notebook locally on a Mac with M1. It works on Google Colab.
**Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb
**Error**: `ValueError: Arrow type extensi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6417/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6417/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2301/comments | https://api.github.com/repos/huggingface/datasets/issues/2301/events | https://github.com/huggingface/datasets/issues/2301 | 873,941,266 | MDU6SXNzdWU4NzM5NDEyNjY= | 2,301 | Unable to setup dev env on Windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visu... | 2021-05-02T13:20:42Z | 2021-05-03T15:18:01Z | 2021-05-03T15:17:34Z | CONTRIBUTOR | null | null | null | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2301/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3043/comments | https://api.github.com/repos/huggingface/datasets/issues/3043/events | https://github.com/huggingface/datasets/issues/3043 | 1,020,252,114 | I_kwDODunzps48z8_S | 3,043 | Add PASS dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_ur... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [] | 2021-10-07T16:43:43Z | 2022-01-20T16:50:47Z | 2022-01-20T16:50:47Z | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** PASS
- **Description:** An ImageNet replacement for self-supervised pretraining without humans
- **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS
Instructions to add a new dataset can be found [here](https://github.com/huggingface/dataset... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3043/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/516/comments | https://api.github.com/repos/huggingface/datasets/issues/516/events | https://github.com/huggingface/datasets/pull/516 | 681,846,032 | MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0 | 516 | [Breaking] Rename formated to formatted | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-08-19T13:35:23Z | 2020-08-20T08:41:17Z | 2020-08-20T08:41:16Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/516.diff",
"html_url": "https://github.com/huggingface/datasets/pull/516",
"merged_at": "2020-08-20T08:41:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/516.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/516... | `formated` is not correct but `formatted` is | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/516/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3695/comments | https://api.github.com/repos/huggingface/datasets/issues/3695/events | https://github.com/huggingface/datasets/pull/3695 | 1,129,730,148 | PR_kwDODunzps4yXP44 | 3,695 | Fix ClassLabel to/from dict when passed names_file | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2022-02-10T09:47:10Z | 2022-02-11T23:02:32Z | 2022-02-11T23:02:31Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3695.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3695",
"merged_at": "2022-02-11T23:02:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3695.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`.
This PR, removes `names_file` as a field of the data class `ClassLabel`.
- it is only used at ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3695/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5196/comments | https://api.github.com/repos/huggingface/datasets/issues/5196/events | https://github.com/huggingface/datasets/pull/5196 | 1,434,401,646 | PR_kwDODunzps5CH439 | 5,196 | Use hfh hf_hub_url function | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5196). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq I think we should first agree if `datasets` can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: some users may have o... | 2022-11-03T10:08:09Z | 2022-12-06T11:38:17Z | 2022-11-09T07:15:12Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5196",
"merged_at": "2022-11-09T07:15:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Small refactoring to use `hf_hub_url` function from `huggingface_hub`.
This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`.
This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5196/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3851/comments | https://api.github.com/repos/huggingface/datasets/issues/3851/events | https://github.com/huggingface/datasets/issues/3851 | 1,162,137,998 | I_kwDODunzps5FRNGO | 3,851 | Load audio dataset error | {
"avatar_url": "https://avatars.githubusercontent.com/u/31890987?v=4",
"events_url": "https://api.github.com/users/lemoner20/events{/privacy}",
"followers_url": "https://api.github.com/users/lemoner20/followers",
"following_url": "https://api.github.com/users/lemoner20/following{/other_user}",
"gists_url": "... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @lemoner20, thanks for reporting.\r\n\r\nI'm sorry but I cannot reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset, load_metric, Audio\r\n ...: raw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\")\r\n ...: print(raw_datasets[0][\"audio\"])\r\nDownloading builder sc... | 2022-03-08T02:16:04Z | 2022-09-27T12:13:55Z | 2022-03-08T11:20:06Z | NONE | null | null | null | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
prin... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3851/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3851/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/399/comments | https://api.github.com/repos/huggingface/datasets/issues/399/events | https://github.com/huggingface/datasets/pull/399 | 657,841,433 | MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy | 399 | Spelling mistake | {
"avatar_url": "https://avatars.githubusercontent.com/u/9410067?v=4",
"events_url": "https://api.github.com/users/BlancRay/events{/privacy}",
"followers_url": "https://api.github.com/users/BlancRay/followers",
"following_url": "https://api.github.com/users/BlancRay/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [
"Thanks!"
] | 2020-07-16T04:37:58Z | 2020-07-16T06:49:48Z | 2020-07-16T06:49:37Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/399.diff",
"html_url": "https://github.com/huggingface/datasets/pull/399",
"merged_at": "2020-07-16T06:49:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/399.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/399... | In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/399/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/855/comments | https://api.github.com/repos/huggingface/datasets/issues/855/events | https://github.com/huggingface/datasets/pull/855 | 743,690,839 | MDExOlB1bGxSZXF1ZXN0NTIxNTQ2Njkx | 855 | Fix kor nli csv reader | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-11-16T09:53:41Z | 2020-11-16T13:59:14Z | 2020-11-16T13:59:12Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/855.diff",
"html_url": "https://github.com/huggingface/datasets/pull/855",
"merged_at": "2020-11-16T13:59:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/855.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/855... | The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason.
I fixed that by iterating through the lines directly instead of using a csv reader.
I also changed the feature names to match the other NLI datasets (i.e. use "premise"... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/855/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4230/comments | https://api.github.com/repos/huggingface/datasets/issues/4230/events | https://github.com/huggingface/datasets/issues/4230 | 1,216,643,661 | I_kwDODunzps5IhIJN | 4,230 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data? | {
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.",
"The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states... | 2022-04-27T00:53:52Z | 2023-07-25T15:10:15Z | 2023-07-25T15:10:15Z | NONE | null | null | null | 
But on huggingface datasets:

Where is the German data? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4230/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5195/comments | https://api.github.com/repos/huggingface/datasets/issues/5195/events | https://github.com/huggingface/datasets/pull/5195 | 1,434,290,689 | PR_kwDODunzps5CHhF2 | 5,195 | [wip testing docs] | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5195). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-03T08:37:34Z | 2023-04-04T15:10:37Z | 2023-04-04T15:10:33Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5195.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5195",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5195.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5195"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5195/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3873/comments | https://api.github.com/repos/huggingface/datasets/issues/3873/events | https://github.com/huggingface/datasets/pull/3873 | 1,163,961,578 | PR_kwDODunzps40LGoV | 3,873 | Create SQuAD metric README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3873). All of your documentation changes will be reflected on that endpoint.",
"Oh one last thing I almost forgot, I think I would add a section \"Examples\" with examples of inputs and outputs and in particular: an example giv... | 2022-03-09T13:47:08Z | 2022-03-10T16:45:57Z | 2022-03-10T16:45:57Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3873",
"merged_at": "2022-03-10T16:45:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Proposal for a metrics card structure (with an example based on the SQuAD metric).
@thomwolf @lhoestq @douwekiela @lewtun -- feel free to comment on structure or content (it's an initial draft, so I realize there's stuff missing!). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3873/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/633/comments | https://api.github.com/repos/huggingface/datasets/issues/633/events | https://github.com/huggingface/datasets/issues/633 | 702,440,484 | MDU6SXNzdWU3MDI0NDA0ODQ= | 633 | Load large text file for LM pre-training resulting in OOM | {
"avatar_url": "https://avatars.githubusercontent.com/u/29704017?v=4",
"events_url": "https://api.github.com/users/leethu2012/events{/privacy}",
"followers_url": "https://api.github.com/users/leethu2012/followers",
"following_url": "https://api.github.com/users/leethu2012/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [
"Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?",
"There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.",
"@lhoestq @sgugger Thanks for your comments. I have install from source ... | 2020-09-16T04:33:15Z | 2021-02-16T12:02:01Z | null | NONE | null | null | null | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/633/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/850/comments | https://api.github.com/repos/huggingface/datasets/issues/850/events | https://github.com/huggingface/datasets/pull/850 | 742,369,419 | MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3 | 850 | Create ClassLabel for labelling tasks datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"@lhoestq Better?"
] | 2020-11-13T11:07:22Z | 2020-11-16T10:32:05Z | 2020-11-16T10:31:58Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/850",
"merged_at": "2020-11-16T10:31:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/850... | This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/850/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6386/comments | https://api.github.com/repos/huggingface/datasets/issues/6386/events | https://github.com/huggingface/datasets/issues/6386 | 1,979,878,014 | I_kwDODunzps52Aop- | 6,386 | Formatting overhead | {
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"events_url": "https://api.github.com/users/d-miketa/events{/privacy}",
"followers_url": "https://api.github.com/users/d-miketa/followers",
"following_url": "https://api.github.com/users/d-miketa/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | [
"Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.",
"I tracked it down to a quirk of my setup. Apologies."
] | 2023-11-06T19:06:38Z | 2023-11-06T23:56:12Z | 2023-11-06T23:56:12Z | NONE | null | null | null | ### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6386/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3579/comments | https://api.github.com/repos/huggingface/datasets/issues/3579/events | https://github.com/huggingface/datasets/pull/3579 | 1,103,451,118 | PR_kwDODunzps4xBmY4 | 3,579 | Add Text2log Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4",
"events_url": "https://api.github.com/users/apergo-ai/events{/privacy}",
"followers_url": "https://api.github.com/users/apergo-ai/followers",
"following_url": "https://api.github.com/users/apergo-ai/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"The CI fails are unrelated to your PR and fixed on master, I think we can merge now !"
] | 2022-01-14T10:45:01Z | 2022-01-20T17:09:44Z | 2022-01-20T17:09:44Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3579",
"merged_at": "2022-01-20T17:09:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adding the text2log dataset used for training FOL sentence translating models | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3579/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6368/comments | https://api.github.com/repos/huggingface/datasets/issues/6368/events | https://github.com/huggingface/datasets/pull/6368 | 1,971,193,692 | PR_kwDODunzps5eRZwQ | 6,368 | Fix python formatting for complex types in `format_table` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-10-31T19:48:08Z | 2023-11-02T14:42:28Z | 2023-11-02T14:21:16Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6368",
"merged_at": "2023-11-02T14:21:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fix #6366 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6368/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3370/comments | https://api.github.com/repos/huggingface/datasets/issues/3370/events | https://github.com/huggingface/datasets/pull/3370 | 1,069,735,423 | PR_kwDODunzps4vUVA3 | 3,370 | Document a training loop for streaming dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-12-02T16:17:00Z | 2021-12-03T13:34:35Z | 2021-12-03T13:34:34Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3370.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3370",
"merged_at": "2021-12-03T13:34:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3370.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | I added some docs about streaming dataset. In particular I added two subsections:
- one on how to use `map` for preprocessing
- one on how to use a streaming dataset in a pytorch training loop
cc @patrickvonplaten @stevhliu if you have some comments
cc @Rocketknight1 later we can add the one for TF and I might ne... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3370/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3370/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6145/comments | https://api.github.com/repos/huggingface/datasets/issues/6145/events | https://github.com/huggingface/datasets/pull/6145 | 1,847,811,310 | PR_kwDODunzps5Xx5If | 6,145 | Export to_iterable_dataset to document | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-08-12T07:00:14Z | 2023-08-15T17:04:01Z | 2023-08-15T16:55:24Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6145",
"merged_at": "2023-08-15T16:55:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fix the export of a missing method of `Dataset` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6145/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3927/comments | https://api.github.com/repos/huggingface/datasets/issues/3927/events | https://github.com/huggingface/datasets/pull/3927 | 1,170,016,465 | PR_kwDODunzps40ewN2 | 3,927 | Update main readme | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"What do you think @albertvillanova ?"
] | 2022-03-15T18:09:59Z | 2022-03-29T10:13:47Z | 2022-03-29T10:08:20Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3927.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3927",
"merged_at": "2022-03-29T10:08:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3927.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3927/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5912/comments | https://api.github.com/repos/huggingface/datasets/issues/5912/events | https://github.com/huggingface/datasets/issues/5912 | 1,730,299,852 | I_kwDODunzps5nIkfM | 5,912 | Missing elements in `map` a batched dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1410927?v=4",
"events_url": "https://api.github.com/users/sachinruk/events{/privacy}",
"followers_url": "https://api.github.com/users/sachinruk/followers",
"following_url": "https://api.github.com/users/sachinruk/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
"Hi ! in your code batching is **only used within** `map`, to process examples in batch. The dataset itself however is not batched and returns elements one by one.\r\n\r\nTo iterate on batches, you can do\r\n```python\r\nfor batch in dataset.iter(batch_size=8):\r\n ...\r\n```"
] | 2023-05-29T08:09:19Z | 2023-07-26T15:48:15Z | 2023-07-26T15:48:15Z | NONE | null | null | null | ### Describe the bug
As outlined [here](https://discuss.huggingface.co/t/length-error-using-map-with-datasets/40969/3?u=sachin), the following collate function drops 5 out of possible 6 elements in the batch (it is 6 because out of the eight, two are bad links in laion). A reproducible [kaggle kernel ](https://www.kag... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5912/timeline | null | completed | false |